Safety Assurance Concepts for Automated Driving Systems

Automated driving systems (ADSs) for road vehicles are being developed that can perform the entire dynamic driving task without a human driver in the loop. However, current regulatory frameworks for assuring vehicle safety may restrict the deployment of ADSs that use machine learning to modify their functionality while in service. A review was undertaken to identify and assess key initiatives and research relevant to the safety assurance of adaptive safety-critical systems that use machine learning, and to highlight assurance concepts that could benefit from further research. The primary objective was to produce findings and recommendations that can inform policy and regulatory reform relating to ADS safety assurance. Because an ADS could encounter an almost unlimited number and combination of scenarios, the review found strong support for concepts that use simulation data as virtual evidence of safety compliance, together with suggestions that the simulation tools and models themselves need to be assured. Real-world behavioural competency testing was also commonly proposed, although this concept has recognised limitations. The concept of whole-of-life assurance was identified, supported by various forms of dynamic runtime monitoring, verification and validation. Concerns were raised about the robustness of Artificial Intelligence (AI), particularly in relation to adversarial inputs and unmodelled scenarios that are essentially unknown unknowns. Further, explainable AI was highlighted as having the potential to provide evidence from an ADS that could support safety assurance and regulatory compliance. While each of the identified assurance concepts should be considered when developing ADS safety assurance framework options, further research on each concept is recommended.