Structuring the Safety Argumentation for Deep Neural Network Based Perception in Automotive Applications

Deep neural networks (DNNs) are widely considered a key technology for perception in high and full driving automation. However, their safety assessment remains challenging, as they exhibit specific insufficiencies: black-box nature, simple performance issues, incorrect internal logic, and instability. These insufficiencies are not sufficiently addressed in existing standards on safety argumentation. In this paper, we systematically establish and break down safety requirements to argue the sufficient absence of risk arising from such insufficiencies. We furthermore argue why diverse evidence is highly relevant for a safety argument involving DNNs, and classify available sources of evidence. Together, this yields a generic approach and template to thoroughly account for DNN specifics within a safety argumentation structure. Its applicability is demonstrated through examples of methods and measures for a pedestrian-detection use case.
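As a rough illustration of how such a requirement breakdown might be represented, the following is a minimal sketch, not taken from the paper: all class names, fields, and evidence descriptions are assumptions. It models a goal-structure-like hierarchy in which a top-level claim for a pedestrian-detection DNN is decomposed into sub-goals for the four named insufficiencies, each supported by attached evidence items.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Evidence:
    """A single piece of evidence supporting a safety claim (hypothetical structure)."""
    source: str        # e.g. "statistical testing", "robustness testing", "field data"
    description: str


@dataclass
class SafetyGoal:
    """A (sub-)goal in a GSN-like safety argumentation structure (hypothetical)."""
    claim: str
    evidence: List[Evidence] = field(default_factory=list)
    subgoals: List["SafetyGoal"] = field(default_factory=list)

    def is_supported(self) -> bool:
        """A decomposed goal needs all sub-goals supported; a leaf goal needs evidence."""
        if self.subgoals:
            return all(goal.is_supported() for goal in self.subgoals)
        return len(self.evidence) > 0


# Top-level goal for a pedestrian-detection DNN, broken down along the
# four insufficiencies named in the abstract (illustrative only).
top_goal = SafetyGoal(
    claim="Residual risk from DNN insufficiencies in pedestrian detection is sufficiently low",
    subgoals=[
        SafetyGoal(
            claim="Black-box nature is sufficiently mitigated",
            evidence=[Evidence("explainability analysis", "saliency and rule-extraction review")],
        ),
        SafetyGoal(
            claim="Performance issues are sufficiently rare",
            evidence=[Evidence("statistical testing", "miss rate below target on validation scenes")],
        ),
        SafetyGoal(
            claim="Internal logic is consistent with domain knowledge",
            evidence=[Evidence("plausibility checks", "detections consistent with scene ontology")],
        ),
        SafetyGoal(
            claim="Outputs are stable under input perturbations",
            evidence=[Evidence("robustness testing", "noise and adversarial perturbation results")],
        ),
    ],
)

print(top_goal.is_supported())  # True once every sub-goal carries evidence
```

In such a representation, diverse sources of evidence could simply be attached as further `Evidence` items per sub-goal, which mirrors the abstract's point that a DNN safety argument benefits from combining multiple, complementary kinds of evidence.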
