CENTER FOR THE SAFETY AND RELIABILITY OF AUTONOMOUS SYSTEMS (SARAS)
SARAS is an interdisciplinary organization focused on the safety and reliability of autonomous systems. We aim to help all stakeholders design, build, operate and certify safe and reliable autonomous systems. We help governments, their agencies, manufacturers and other researchers create frameworks under which reliable and safe autonomous systems are designed and operated.
The incorporation of autonomous systems into everyday life — from the perspective of safety and reliability — will be either painful, planned or lucky.
We aim to help it be planned.
Our vision is necessarily futuristic. We see a world where many of the tasks and activities we undertake today are more effectively and efficiently undertaken by autonomous systems. We see global productivity increasing as a result. We see an environment where emerging ideas about how autonomy can help society more broadly quickly become realities.
Our vision is that deciding whether an autonomous system is safe and reliable is not a barrier to its successful implementation.
Machines Learning to be Reliable
Machine learning is not a new concept—machines are very good at quickly going through data and can fairly easily identify patterns. If a machine identifies a pattern in failure data, it may have identified a causal relationship—that is, the cause of a particular form of failure. But in the field of reliability engineering, we already have many models of failure mechanisms. These are better than "patterns" because they are based on science. Can we combine the benefits of machine learning with our "human" understanding of why things fail? Can we motivate a machine to work out how to better 'operate itself' to be more reliable? These are questions we are attempting to answer.
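To make the contrast concrete, here is a minimal sketch of the model-based side of this idea: fitting a classical constant-hazard (exponential) failure model to observed failure data by maximum likelihood, then using the fitted model to predict reliability. The failure times are hypothetical illustrative numbers; a data-driven approach would look for patterns in the same data without assuming the model.

```python
import math

def fit_exponential_rate(failure_times):
    """Maximum-likelihood failure rate for a constant-hazard (exponential) model."""
    return len(failure_times) / sum(failure_times)

def reliability(rate, t):
    """Probability of surviving to time t under the model: R(t) = exp(-rate * t)."""
    return math.exp(-rate * t)

# Hypothetical failure data: hours to failure for five identical components.
times = [120.0, 95.0, 150.0, 130.0, 105.0]
rate = fit_exponential_rate(times)      # MLE: n / total operating time
print(round(1 / rate, 1))               # mean time to failure → 120.0
print(round(reliability(rate, 50), 3))  # chance of surviving 50 h → 0.659
```

A physics-informed approach might replace the exponential assumption with a mechanism-specific model (e.g., a Weibull distribution for wear-out), while a learning component estimates its parameters from operational data.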
Autonomous Systems Control Software - Continuous Risk Assessment
How do we assure that an autonomous system does what it is supposed to do? Autonomous systems are controlled by software, which is supposed to always make “right” decisions. Probabilistic risk assessment is one way to assess and verify that a system is safe enough to operate. Software behaves unlike physical system components, so its assessment requires adapted methods. The aim of this ongoing research is to develop a method for assessing the impact of software control systems on the operational risk of autonomous systems.
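As a rough illustration of probabilistic risk assessment applied to a system with a software component, the sketch below runs a Monte Carlo simulation over a toy fault logic: the system fails on a given demand if either a sensor fault or a wrong software decision occurs and an independent backup also fails to intervene. All probabilities and the fault structure are hypothetical, chosen only to show the estimation technique.

```python
import random

# Hypothetical per-demand failure probabilities (illustrative numbers only).
P_SENSOR = 0.001      # sensor delivers bad data
P_SOFTWARE = 0.0005   # control software makes a wrong decision
P_BACKUP = 0.01       # independent backup fails to intervene

def one_demand(rng):
    """Simulate one operational demand; return True if the system fails."""
    sensor_bad = rng.random() < P_SENSOR
    software_bad = rng.random() < P_SOFTWARE
    backup_bad = rng.random() < P_BACKUP
    # The system fails only when a fault occurs AND the backup misses it.
    return (sensor_bad or software_bad) and backup_bad

def estimate_risk(trials, seed=0):
    """Monte Carlo estimate of the per-demand system failure probability."""
    rng = random.Random(seed)
    failures = sum(one_demand(rng) for _ in range(trials))
    return failures / trials

print(estimate_risk(1_000_000))
```

For this toy fault tree the analytic answer is roughly (P_SENSOR + P_SOFTWARE) × P_BACKUP ≈ 1.5e-5, which the simulation should approach as the number of trials grows. In practice, the hard part this research addresses is that a software "failure probability" is not a fixed physical rate but depends on the operational context in which the software runs.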
+ Staff Publications
Dr. Paulo Tabuada
- Correctness Guarantees for the Composition of Lane Keeping and Adaptive Cruise Control
- Control Barrier Function Based Quadratic Programs for Safety Critical Systems
- Decomposing Controller Synthesis for Safety Specifications
Dr. Jason Speyer
- Optimal Planning of Autonomous Air Vehicle Battle Management
- A Methodology for Reducing the Admissible Hypotheses for GPS Integer Ambiguity Resolution
Dr. Izhak Rubin
+ Government Reports
+ Books and Articles
- Marchant, G. & Lindor, R. The Coming Collision Between Autonomous Vehicles and the Liability System
- Villasenor, J. Driverless Cars: Issues and Guiding Principles for Legislation