Performance Analysis of Out-of-Distribution Detection on Various Trained Neural Networks
Jens Henriksson | Christian Berger | Markus Borg | Lars Tornberg | Cristofer Englund | Sankar Raman Sathyamoorthy
[1] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[2] Rick Salay, et al. An Analysis of ISO 26262: Using Machine Learning Safely in Automotive Software, 2017, ArXiv.
[3] Kevin Gimpel, et al. A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks, 2016, ICLR.
[4] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[5] Rick Salay, et al. Towards a Framework to Manage Perceptual Uncertainty for Safe Automated Driving, 2018, SAFECOMP Workshops.
[6] Klaus-Robert Müller, et al. Efficient BackProp, 2012, Neural Networks: Tricks of the Trade.
[7] Yue Zhao, et al. DLFuzz: Differential Fuzzing Testing of Deep Learning Systems, 2018, ESEC/SIGSOFT FSE.
[8] Sungzoon Cho, et al. Variational Autoencoder based Anomaly Detection using Reconstruction Probability, 2015.
[9] Ian J. Goodfellow, et al. NIPS 2016 Tutorial: Generative Adversarial Networks, 2016, ArXiv.
[10] Markus Borg, et al. Automotive Safety and Machine Learning: Initial Results from a Study on How to Adapt the ISO 26262 Safety Standard, 2018, 2018 IEEE/ACM 1st International Workshop on Software Engineering for AI in Autonomous Systems (SEFAIAS).
[11] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[12] Junfeng Yang, et al. DeepXplore: Automated Whitebox Testing of Deep Learning Systems, 2019, Commun. ACM.
[13] Markus Borg, et al. Traceability and Deep Learning - Safety-Critical Systems with Traces Ending in Deep Neural Networks, 2017.
[14] Markus Borg, et al. Safely Entering the Deep: A Review of Verification and Validation for Machine Learning and a Challenge Elicitation in the Automotive Industry, 2018, Journal of Automotive Software Engineering.
[15] R. Srikant, et al. Principled Detection of Out-of-Distribution Examples in Neural Networks, 2017, ArXiv.
[16] R. Srikant, et al. Enhancing the Reliability of Out-of-Distribution Image Detection in Neural Networks, 2017, ICLR.
[17] Christian Berger, et al. Towards Structured Evaluation of Deep Neural Network Supervisors, 2019, 2019 IEEE International Conference on Artificial Intelligence Testing (AITest).
[18] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[19] Guigang Zhang, et al. Deep Learning, 2016, Int. J. Semantic Comput.
[20] Nitish Srivastava, et al. Dropout: A Simple Way to Prevent Neural Networks from Overfitting, 2014, J. Mach. Learn. Res.
[21] Terrance E. Boult, et al. Towards Open Set Deep Networks, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[22] Jan Hendrik Metzen, et al. On Detecting Adversarial Perturbations, 2017, ICLR.
[23] Shin Yoo, et al. Guiding Deep Learning System Testing Using Surprise Adequacy, 2018, 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE).
[24] Thomas G. Dietterich, et al. Deep Anomaly Detection with Outlier Exposure, 2018, ICLR.
[25] Kilian Q. Weinberger, et al. Densely Connected Convolutional Networks, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[26] Michael S. Bernstein, et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.