Interpretable Model-Agnostic Plausibility Verification for 2D Object Detectors Using Domain-Invariant Concept Bottleneck Models

Despite their unchallenged performance, deep neural network (DNN) based object detectors (ODs) for computer vision have inherent, hard-to-verify limitations such as brittleness, opacity, and unknown behavior on corner cases. Operation-time safety measures like monitors are therefore inevitable, even mandatory, for use in safety-critical applications like automated driving (AD). This paper presents an approach for verifying the plausibility of OD detections using a small model-agnostic, robust, interpretable, and domain-invariant image classification model. The safety requirements of interpretability and robustness are met by using a small concept bottleneck model (CBM), a DNN whose processing is intercepted by an interpretable layer of intermediate concept outputs. Domain invariance is necessary both for robustness against common domain shifts and for cheap adaptation to diverse AD settings. While vanilla CBMs are shown here to fail under domain shifts such as natural perturbations, we substantially improve the CBM by combining it with trainable color-invariance filters originally developed for domain adaptation. Furthermore, a monitor built on CBMs with trainable color-invariance filters is successfully applied in an AD OD setting to the detection of hallucinated objects with zero-shot domain adaptation, and to false-positive detection with few-shot adaptation, demonstrating this to be a promising approach for error monitoring.
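To make the described pipeline concrete, the following is a minimal, purely illustrative PyTorch sketch of its two components: a learnable color-invariance filter placed in front of a small CBM, and a plausibility check that re-classifies a detector's box crop and flags disagreement. The `LearnableColorInvariance` module is a simplified stand-in for the trainable color-invariance filters the abstract refers to, and all names, layer sizes, and the `flag_implausible` helper are hypothetical assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch only; not the authors' implementation.
import torch
import torch.nn as nn

class LearnableColorInvariance(nn.Module):
    """Toy stand-in for a trainable color-invariance filter: maps RGB to one
    log-chromaticity-like channel with a scale learned jointly with the CBM."""
    def __init__(self):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(1))

    def forward(self, x):  # x: (B, 3, H, W), values in (0, 1]
        log_x = torch.log(x.clamp_min(1e-5))
        # Differences of log-channels cancel a global illumination scaling,
        # giving (approximate) invariance to such color/intensity shifts.
        return self.scale * (log_x[:, :1] - log_x.mean(dim=1, keepdim=True))

class ConceptBottleneckModel(nn.Module):
    """DNN whose processing passes through an interpretable layer of concept scores."""
    def __init__(self, n_concepts=16, n_classes=4):
        super().__init__()
        self.invariance = LearnableColorInvariance()
        self.backbone = nn.Sequential(  # small CNN encoder; sizes are arbitrary
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.concept_head = nn.Linear(64, n_concepts)   # supervised with concept labels
        self.task_head = nn.Linear(n_concepts, n_classes)

    def forward(self, x):
        feats = self.backbone(self.invariance(x))
        concepts = torch.sigmoid(self.concept_head(feats))  # interpretable bottleneck
        return concepts, self.task_head(concepts)

def flag_implausible(cbm, crop, detector_class, threshold=0.5):
    """Plausibility check: re-classify the detector's box crop with the CBM and
    flag the detection if the CBM disagrees (possible hallucination / false positive).
    Returns the flag plus the concept scores, which explain the verdict."""
    concepts, logits = cbm(crop)              # crop: (1, 3, H, W)
    probs = logits.softmax(dim=-1)
    return probs[0, detector_class] < threshold, concepts
```

Because the class decision is forced through the concept scores, a flagged detection can be inspected in human-interpretable terms (which concepts were or were not present), which is what makes the monitor's output auditable in a safety argumentation.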
