Reevaluating the Safety Impact of Inherent Interpretability on Deep Neural Networks for Pedestrian Detection

AI-based perception is a key enabler of automated driving systems. A conclusive safety argument must provide evidence of safe functioning. Existing safety standards are not equipped to deal with non-interpretable deep neural networks (DNNs) that learn from unstructured data. This work provides a proof of concept for a comprehensible requirements analysis based on an interpretable DNN. Recent work on interpretability motivates rethinking the software considerations of established safety standards. We describe how these considerations can be applied to DNNs by integrating interpretability and identifying artifacts; DNN artifacts result from a meaningful decomposition of requirements and adaptations of the perception pipeline. As a proof of concept, we propose an interpretable method for center, scale, and prototype prediction (CSPP) that learns an explicitly structured latent space. The interpretability-based requirements analysis of CSPP is completed by tracing artifacts and source code back to the decomposed requirements. Finally, qualitative post-hoc evaluations provide evidence that the defined requirements on the latent space are fulfilled.
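The paper's implementation is not shown here, but the core idea of CSPP, extending a center-and-scale prediction (CSP) detection head with a prototype layer so that each detection can be explained by its similarity to learned prototypes in a structured latent space, can be illustrated with a minimal PyTorch sketch. All class names, dimensions, and the distance-based similarity below are illustrative assumptions in the spirit of "This Looks Like That"-style prototype learning, not the authors' code.

```python
# Minimal sketch (assumption): a CSP-style pedestrian detection head extended
# with a prototype layer. Names (CSPPHead, proto_dim, ...) are hypothetical.
import torch
import torch.nn as nn


class CSPPHead(nn.Module):
    """Predicts a center heatmap and a scale map (as in CSP) plus a
    prototype-similarity map, so detections can be attributed to their
    nearest prototypes in an explicitly structured latent space."""

    def __init__(self, in_channels: int = 256, num_prototypes: int = 10,
                 proto_dim: int = 64):
        super().__init__()
        self.center_head = nn.Conv2d(in_channels, 1, kernel_size=1)    # center heatmap
        self.scale_head = nn.Conv2d(in_channels, 1, kernel_size=1)     # log-height map
        self.embed = nn.Conv2d(in_channels, proto_dim, kernel_size=1)  # latent features
        # Learned prototype vectors spanning the structured latent space.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, proto_dim))

    def forward(self, features: torch.Tensor):
        center = torch.sigmoid(self.center_head(features))   # (B, 1, H, W)
        scale = self.scale_head(features)                     # (B, 1, H, W)
        z = self.embed(features)                              # (B, D, H, W)
        # Squared Euclidean distance between each latent pixel and each prototype;
        # negated distance serves as a similarity score.
        b, d, h, w = z.shape
        z_flat = z.permute(0, 2, 3, 1).reshape(-1, d)         # (B*H*W, D)
        dists = torch.cdist(z_flat, self.prototypes) ** 2     # (B*H*W, P)
        proto_sim = (-dists).reshape(b, h, w, -1).permute(0, 3, 1, 2)  # (B, P, H, W)
        return center, scale, proto_sim
```

At inference, each positive center location would then be attributed to its nearest prototypes, which is what makes qualitative post-hoc checks of requirements on the latent space (for example, that prototypes are well separated and cover the pedestrian appearance variations) feasible.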
