Out-of-distribution Detection and Generation using Soft Brownian Offset Sampling and Autoencoders

Deep neural networks often suffer from overconfidence, which can be partly remedied by improved out-of-distribution detection. For this purpose, we propose a novel approach that generates an out-of-distribution dataset from a given in-distribution dataset. This new dataset can then be used to improve out-of-distribution detection for the given dataset and the machine learning task at hand. The samples in this dataset lie close to the in-distribution dataset in feature space and are therefore realistic and plausible. Hence, the dataset can also be used to safeguard neural networks, i.e., to validate their generalization performance. Our approach first generates suitable representations of an in-distribution dataset using an autoencoder and then transforms them with our newly proposed Soft Brownian Offset method. After this transformation, the decoder part of the autoencoder generates the implicit out-of-distribution samples. The newly generated dataset can then be mixed with other datasets to improve the training of an out-of-distribution classifier, increasing its performance. Experimentally, we show that our approach is promising for time series using synthetic data. In a quantitative case study, we further show that our method improves out-of-distribution detection on the MNIST dataset. Finally, we provide a case study on the synthetic generation of out-of-distribution trajectories, which can be used to validate trajectory prediction algorithms for automated driving.
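
To make the pipeline concrete, the following is a minimal sketch in Python under our own simplifying assumptions; it is not the reference implementation of Soft Brownian Offset. The function name, the hyperparameter values, and the `encoder`/`decoder` objects in the usage comments are illustrative placeholders, and the distance-based acceptance rule is a simplified reading of the method described above.

```python
import numpy as np


def soft_brownian_offset(Z, d_min=0.5, d_off=0.1, softness=0.0, rng=None):
    """Push in-distribution latent codes away from the in-distribution set.

    Simplified sketch of the idea: starting from each latent code, take small
    random (Brownian-motion-like) steps of length d_off until the distance to
    the in-distribution set exceeds a target d_min. With softness > 0, the
    distance constraint is relaxed stochastically, so some generated samples
    may lie somewhat closer to the in-distribution data.
    """
    rng = np.random.default_rng() if rng is None else rng
    Z = np.asarray(Z, dtype=float)
    ood_samples = []
    for z in Z:
        y = z.copy()
        while True:
            step = rng.normal(size=y.shape)              # random direction
            y = y + d_off * step / np.linalg.norm(step)  # fixed-length offset step
            dist = np.min(np.linalg.norm(Z - y, axis=1)) # distance to in-distribution set
            # Soft acceptance: for softness = 0 this is a hard minimum distance.
            if dist >= d_min * (1.0 - softness * rng.random()):
                break
        ood_samples.append(y)
    return np.stack(ood_samples)


# Hypothetical usage with a trained autoencoder (encoder/decoder are placeholders):
# Z = encoder.predict(X_in)                                  # latent codes of in-distribution data
# Z_ood = soft_brownian_offset(Z, d_min=0.5, d_off=0.1)      # offset in latent space
# X_ood = decoder.predict(Z_ood)                             # implicit out-of-distribution samples
# X = np.vstack([X_in, X_ood])
# y = np.hstack([np.zeros(len(X_in)), np.ones(len(X_ood))])  # 1 = out-of-distribution
# ...then train a binary out-of-distribution classifier on (X, y).
```

The decoded samples inherit the structure of the in-distribution data while lying a controlled distance away in latent space, which is what makes them useful as realistic yet out-of-distribution training material for the detector.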
