Out-of-Distribution Detection for Automotive Perception

Neural networks (NNs) are widely used for object classification in autonomous driving. However, NNs can fail on input data not well represented by the training dataset, known as out-of-distribution (OOD) data. A mechanism for detecting OOD samples is important in safety-critical applications, such as automotive perception, to trigger a safe fallback mode. NNs often rely on softmax normalization for confidence estimation, which can lead to high confidence being assigned to OOD samples, thus hindering the detection of failures. This paper presents a method for determining whether inputs are OOD that neither requires OOD data during training nor increases the computational cost of inference. The latter property is especially important in automotive applications, where computational resources are limited and real-time constraints apply. Our proposed approach outperforms state-of-the-art methods on real-world automotive datasets.
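The softmax overconfidence issue mentioned above can be illustrated with a minimal sketch (the logit values below are hypothetical, not taken from the paper): because softmax only normalizes the logits relative to each other, a classifier with no "none of the above" option still concentrates probability mass on whichever logit happens to be largest, even for an OOD input.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits produced by a 3-class classifier for an
# input far from the training distribution. Softmax still maps
# them to a distribution summing to 1, so the largest logit
# receives high "confidence" regardless of how unfamiliar the
# input actually is.
logits = [4.0, 1.0, 0.5]
probs = softmax(logits)
confidence = max(probs)
```

Here `confidence` exceeds 0.9 even though nothing in the computation reflects whether the input resembles the training data, which is why the maximum softmax probability alone is an unreliable OOD signal.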
