Rick Salay | Krzysztof Czarnecki | Sachin Vernekar | Ashish Gaurav | Taylor Denouden | Buu Phan | Vahdat Abdelzad