Label-Assisted Memory Autoencoder for Unsupervised Out-of-Distribution Detection

Out-of-Distribution (OoD) detectors based on AutoEncoders (AEs) rely on the underlying assumption that an AE trained on in-distribution (ID) data only cannot reconstruct OoD data as well as ID data. However, this assumption may be violated in practice, degrading detection performance. Alleviating the factors that violate this assumption can therefore improve the robustness of OoD detection. Our empirical studies further show that image complexity is another factor hindering the detection performance of AE-based detectors. To address these issues, we propose two OoD detectors, LAMAE and LAMAE+, both of which can be trained without any OoD-related data. The key idea is to regularize the AE architecture with a classifier and a label-assisted memory that confine the reconstruction of OoD data while retaining the reconstruction ability for ID data. We also adjust the reconstruction error by taking image complexity into account. Experimental studies show that the proposed detectors perform well across a wider range of OoD scenarios.
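
The description above suggests a memory-augmented autoencoder in the spirit of MemAE, with memory slots partitioned per class and selected by a classifier's predicted label, plus an image-complexity adjustment of the reconstruction-error score. The following PyTorch sketch is one hypothetical reading of that idea, not the authors' implementation; the names (LabelAssistedMemoryAE, image_entropy, ood_score), layer sizes, the dot-product attention over memory slots, and the Shannon-entropy complexity proxy are all illustrative assumptions.

```python
# Hypothetical sketch of the label-assisted memory AE idea (not the authors'
# code): the latent code is replaced by a convex combination of memory slots,
# and the slot bank is chosen per sample by a classifier's predicted label.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelAssistedMemoryAE(nn.Module):
    def __init__(self, in_dim=784, latent_dim=64, num_classes=10, slots_per_class=20):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, in_dim), nn.Sigmoid(),  # pixels assumed in [0, 1]
        )
        # Classifier on the latent code picks which per-class memory bank to use.
        self.classifier = nn.Linear(latent_dim, num_classes)
        # One bank of memory slots per class: (num_classes, slots, latent_dim).
        self.memory = nn.Parameter(torch.randn(num_classes, slots_per_class, latent_dim))

    def forward(self, x):
        z = self.encoder(x)                       # (B, latent_dim)
        logits = self.classifier(z)               # (B, num_classes)
        label = logits.argmax(dim=1)              # predicted class per sample
        bank = self.memory[label]                 # (B, slots, latent_dim)
        # Attention weights over the chosen bank's slots (dot-product softmax).
        attn = F.softmax(torch.einsum('bd,bsd->bs', z, bank), dim=1)
        z_hat = torch.einsum('bs,bsd->bd', attn, bank)  # memory-based code
        return self.decoder(z_hat), logits

def image_entropy(img, bins=256):
    """Shannon entropy of pixel intensities, a crude image-complexity proxy."""
    hist = torch.histc(img, bins=bins, min=0.0, max=1.0)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * p.log2()).sum()

def ood_score(model, x):
    """Complexity-adjusted reconstruction error: higher means more OoD-like."""
    x_flat = x.flatten(1)
    x_hat, _ = model(x_flat)
    err = F.mse_loss(x_hat, x_flat, reduction='none').mean(dim=1)
    comp = torch.stack([image_entropy(img) for img in x])
    return err / comp.clamp(min=1e-6)  # normalize away the complexity effect
```

At training time, the classifier head would presumably be fit with ID class labels (the "label-assisted" part), so that at test time an OoD input is reconstructed only through memory slots tuned to some ID class, which inflates its reconstruction error relative to ID inputs.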
