SARAS is an interdisciplinary organization focused on autonomous system safety and reliability. We aim to help all stakeholders design, build, operate, and certify safe and reliable autonomous systems. We help governments, their agencies, manufacturers, and other researchers create frameworks under which reliable and safe autonomous systems are designed and operated.

From the perspective of safety and reliability, the incorporation of autonomous systems into everyday life will be painful, planned, or lucky.
We aim to help it be planned.


Our vision is necessarily futuristic. We see a world where many of the tasks and activities we undertake today are undertaken more effectively and efficiently by autonomous systems. We see global productivity increasing as a result. We see an environment where emerging ideas about how autonomy can benefit society more broadly quickly become realities.

Our vision is that the decision about whether an autonomous system is safe and reliable is not a barrier to its successful implementation.


Machines Learning to be Reliable

Machine learning is not a new concept: machines are very good at quickly working through data and can fairly easily identify patterns. If a machine identifies a pattern in failure data, it may have found a causal relationship; that is, it may have found the cause of a particular form of failure. But in the field of reliability engineering, we already have many models of failure mechanisms, and these are better than "patterns" because they are grounded in science. Can we combine the benefits of machine learning with our "human" understanding of why things fail? Can we motivate a machine to work out how to better "operate itself" to be more reliable? These are the questions we are attempting to answer.
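One way to picture "a machine operating itself to be more reliable" is to let the machine search over its own operating settings using a physics-based life model rather than raw patterns. The sketch below is purely illustrative, not SARAS's method: the Weibull parameters, the inverse-power-law stress model, and the throughput function are all hypothetical assumptions chosen for the example.

```python
import math

# Hypothetical physics-based failure model (assumed for illustration):
# Weibull reliability whose characteristic life eta shrinks as operating
# stress rises (inverse power law); the shape parameter beta is fixed.
BETA = 2.0     # Weibull shape > 1 implies wear-out behaviour
ETA0 = 1000.0  # characteristic life (hours) at reference stress 1.0
POWER = 3.0    # assumed inverse-power-law exponent

def characteristic_life(stress):
    """Characteristic life at a given relative stress level."""
    return ETA0 / stress ** POWER

def reliability(t, stress):
    """Probability the system survives to time t at a given stress."""
    eta = characteristic_life(stress)
    return math.exp(-((t / eta) ** BETA))

def throughput(stress):
    """Hypothetical benefit of running harder (illustrative only)."""
    return stress

def choose_operating_point(mission_time, min_reliability):
    """Grid-search the stress level: the 'machine' picks the highest
    throughput whose model-predicted reliability stays acceptable."""
    best = None
    for i in range(1, 201):
        s = i / 100.0  # candidate stress levels from 0.01 to 2.00
        if reliability(mission_time, s) >= min_reliability:
            if best is None or throughput(s) > throughput(best):
                best = s
    return best

stress = choose_operating_point(mission_time=100.0, min_reliability=0.99)
```

In a real system, the model parameters would themselves be learned or updated from failure data, which is where machine learning and reliability engineering meet.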

Read More →

Autonomous Systems Control Software - Continuous Risk Assessment

How do we assure that an autonomous system does what it is supposed to do? Autonomous systems are controlled by software, which is supposed to always make the "right" decisions. Probabilistic risk assessment is one way to assess and verify that a system is safe enough to operate. Software, however, behaves unlike physical system components, so its assessment requires adapted methods. The aim of this ongoing research is to develop a method for assessing the impact of software control systems on the operational risk level of autonomous systems.
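To make the idea of probabilistic risk assessment concrete, the sketch below treats the control software as one basic event in a small fault tree alongside hardware failures. The tree structure, event names, and all failure rates are hypothetical, chosen only to show the mechanics; they are not results of this research.

```python
import math

def p_fail(rate_per_hour, hours):
    """Probability a component with a constant failure rate fails within t."""
    return 1.0 - math.exp(-rate_per_hour * hours)

def mission_risk(hours, p_software_wrong_decision):
    """Top-event probability for a hypothetical fault tree:
    loss of safe control occurs if ANY of the following happens:
      - both sensors of a redundant pair fail (AND gate), or
      - the actuator fails, or
      - the software makes a wrong decision on demand.
    All rates below are assumed values for illustration."""
    p_sensor_pair = p_fail(1e-4, hours) * p_fail(1e-4, hours)  # AND gate
    p_actuator = p_fail(1e-5, hours)
    events = [p_sensor_pair, p_actuator, p_software_wrong_decision]
    # OR gate assuming independent events: 1 - prod(1 - p_i)
    p_top = 1.0
    for p in events:
        p_top *= (1.0 - p)
    return 1.0 - p_top

risk = mission_risk(hours=10.0, p_software_wrong_decision=1e-4)
```

Assessing the software term is the hard part: unlike the hardware rates, it is not a physical wear-out process, which is why adapted methods are needed.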

Read More →