Risk Assessment of Autonomous System Control Software

Autonomous systems such as autonomous vehicles and ships will change the characteristics of traffic and transportation. Several organizations, public and private, are developing and testing such systems, and regulatory bodies require proof that these systems are safe to operate in public. One set of tools that has previously been used in other industries is probabilistic risk assessment (PRA). PRA is successfully applied to nuclear, chemical, and oil and gas installations to ensure that their operation keeps public risk below acceptable levels.

The brain of an autonomous system is its control system, specifically designed for each application. Autonomous control systems execute four tasks:

  1. collect information (such as the data from sensors);

  2. orient themselves based on the observed information;

  3. decide on actions based on the current situation or state of knowledge; and

  4. implement these actions, through control signals to actuators and other subsystems.

This process is analogous to the famous 'OODA' loop (observe, orient, decide, act) developed by United States Air Force Colonel John Boyd in the 1960s and 1970s. The OODA loop has since been used extensively in many fields beyond military strategy, including operations, business, and law.
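The four tasks above can be sketched as a single pass through a control loop. This is a minimal illustration only; all names (`Observation`, `Estimator`, `Planner`, the obstacle-avoidance logic) are invented for the example and do not come from SARAS or any particular autonomous system.

```python
from dataclasses import dataclass


@dataclass
class Observation:
    """Step 1 (observe): raw information collected from a sensor."""
    distance_to_obstacle: float  # metres, e.g. from a range sensor


class Estimator:
    """Step 2 (orient): turn raw observations into a state of knowledge."""
    def update(self, obs: Observation) -> dict:
        return {"obstacle_close": obs.distance_to_obstacle < 10.0}


class Planner:
    """Step 3 (decide): choose an action from the current state."""
    def decide(self, state: dict) -> str:
        return "brake" if state["obstacle_close"] else "cruise"


def control_step(obs: Observation, estimator: Estimator, planner: Planner) -> str:
    """One observe-orient-decide-act cycle; the returned action would be
    sent as control signals to actuators (step 4) in a real system."""
    state = estimator.update(obs)   # orient
    return planner.decide(state)    # decide, then act on the result
```

In operation, `control_step` would run repeatedly inside a loop, with each returned action applied through the actuators before the next observation is collected.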

The control system is therefore the cornerstone of any autonomous system: to determine whether an autonomous system is safe, we focus on its control system. The control system must be assessed in terms of risk and its possible contribution to accidents; without that assessment, no equivalent assessment can be made for the autonomous system as a whole.

Control systems are mainly made up of software, and software behaves differently from hardware components with respect to failure patterns. Software does not fail randomly or through ageing effects. Software faults are present from the beginning of operation or are introduced during operation through updates; they are caused by insufficient or erroneous specifications, or by errors introduced during programming and implementation. Because the faults are already in the software, its failures are deterministic. Testing and verification procedures attempt to remove the errors that might lead to failures, and PRA can be used to quantify the uncertainty associated with the faults that remain.
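To illustrate what quantifying the uncertainty about remaining faults can look like, here is a textbook frequentist calculation, not the SARAS method: after `n` failure-free test demands, the upper confidence bound on the per-demand failure probability `p` follows from solving `(1 - p)**n = 1 - confidence`.

```python
def failure_prob_upper_bound(n_tests: int, confidence: float = 0.95) -> float:
    """Upper confidence bound on per-demand software failure probability
    after n_tests failure-free tests.

    With zero observed failures, P(no failures | p) = (1 - p)**n_tests.
    Setting this equal to alpha = 1 - confidence and solving for p gives
    the largest failure probability still consistent with the test record.
    """
    alpha = 1.0 - confidence
    return 1.0 - alpha ** (1.0 / n_tests)


# Example: 1,000 failure-free demands still only bound the failure
# probability at roughly 3e-3 with 95% confidence.
bound = failure_prob_upper_bound(1000, confidence=0.95)
```

The example also shows why testing alone struggles to demonstrate very low failure probabilities: the bound shrinks only in proportion to the number of failure-free tests, which motivates combining test evidence with risk-assessment methods such as PRA.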

Ongoing SARAS research aims to synthesize and enhance existing methods for analyzing software and its implicit risk contribution. These methods will assist system designers and operators in verifying the safety of autonomous systems. The method we are developing will be embedded in a PRA software tool to make it both practical and applicable.

The research is carried out in cooperation with the Norwegian Centre of Excellence for Autonomous Marine Operations and Systems (AMOS) at the Norwegian University of Science and Technology (NTNU). AMOS intends to initially apply the method to underwater robots and autonomous ships, but the relevance to other autonomous systems, such as autonomous vehicles, is clear.