Dense anomaly detection by robust learning on synthetic negative data

Standard machine learning models cannot accommodate inputs that do not belong to the training distribution. The resulting models often produce confident yet incorrect predictions, which can have devastating consequences. This problem is especially demanding in dense prediction, since input images may be only partially anomalous. Previous work addresses dense anomaly detection through discriminative training on mixed-content images. We extend this approach with synthetic negative patches that simultaneously attain high inlier likelihood and uniform discriminative prediction. We generate the synthetic negatives with normalizing flows due to their outstanding distribution coverage and their ability to generate samples at different resolutions. We also propose to detect anomalies according to a principled information-theoretic criterion that can be applied consistently during training and inference. The resulting models set a new state of the art on standard benchmarks and datasets, despite minimal computational overhead and without relying on auxiliary negative data.
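
To make the described recipe more concrete, the following PyTorch-style sketch shows one plausible realization of training on mixed-content images with synthetic negatives and of the test-time scoring. It is an illustrative assumption rather than the authors' exact formulation: paste_negative_patch, mixed_content_loss and anomaly_score are hypothetical helpers, neg_patch is assumed to be sampled from a separately trained normalizing flow, and the entropy-based score stands in for the paper's information-theoretic criterion.

```python
# Minimal sketch, assuming: flow-generated negative patches pasted into
# inlier training crops, cross-entropy on inlier pixels, a push towards the
# uniform predictive distribution on pasted pixels, and per-pixel softmax
# entropy as the test-time anomaly score. All names are placeholders.
import math
import torch
import torch.nn.functional as F

def paste_negative_patch(image, labels, neg_patch, ignore_id=255):
    """Paste a (flow-sampled) negative patch at a random location.

    image:     3 x H x W inlier crop
    labels:    H x W ground-truth class ids
    neg_patch: 3 x h x w sample from a pretrained normalizing flow
    """
    _, H, W = image.shape
    _, h, w = neg_patch.shape
    y = torch.randint(0, H - h + 1, (1,)).item()
    x = torch.randint(0, W - w + 1, (1,)).item()
    image, labels = image.clone(), labels.clone()
    image[:, y:y + h, x:x + w] = neg_patch
    labels[y:y + h, x:x + w] = ignore_id          # exclude from the inlier loss
    neg_mask = torch.zeros(H, W, dtype=torch.bool)
    neg_mask[y:y + h, x:x + w] = True
    return image, labels, neg_mask

def mixed_content_loss(logits, labels, neg_mask, ignore_id=255):
    """Cross-entropy on inlier pixels plus a push towards the uniform
    distribution (equivalently, maximum entropy) on negative pixels.

    logits: B x C x H x W, labels: B x H x W, neg_mask: B x H x W (bool)
    Assumes at least one negative pixel per batch.
    """
    ce = F.cross_entropy(logits, labels, ignore_index=ignore_id)
    log_p = F.log_softmax(logits, dim=1)
    entropy = -(log_p.exp() * log_p).sum(dim=1)   # per-pixel entropy
    num_classes = logits.shape[1]
    neg_loss = (math.log(num_classes) - entropy)[neg_mask].mean()  # KL(p || uniform)
    return ce + neg_loss

def anomaly_score(logits):
    """Per-pixel anomaly score at inference: high predictive uncertainty
    (softmax entropy) indicates an anomaly."""
    log_p = F.log_softmax(logits, dim=1)
    return -(log_p.exp() * log_p).sum(dim=1)
```

In practice one would weight the two loss terms and threshold the per-pixel score to obtain an anomaly mask; those choices, like the exact criterion, are not specified by the abstract above.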
