Using Machine Learning Safely in Automotive Software: An Assessment and Adaption of Software Process Requirements in ISO 26262

The use of machine learning (ML) is on the rise in many sectors of software development, and automotive software development is no different. In particular, Advanced Driver Assistance Systems (ADAS) and Automated Driving Systems (ADS) are two areas where ML plays a significant role. In automotive development, safety is a critical objective, and the emergence of standards such as ISO 26262 has helped focus industry practices to address safety in a systematic and consistent way. Unfortunately, these standards were not designed to accommodate technologies such as ML or the type of functionality provided by an ADS, and this has created a conflict between the need to innovate and the need to improve safety. In this report, we take steps toward addressing this conflict by performing a detailed assessment and adaptation of ISO 26262 for ML, specifically in the context of supervised learning. First, we analyze the key factors that are the source of the conflict. Then, we assess each software development process requirement (Part 6 of ISO 26262) for its applicability to ML. Where gaps exist, we propose new requirements to address them. Finally, we discuss the application of this adapted and extended variant of Part 6 to ML development scenarios.
