Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
[1] A. Odén, et al. Arguments for Fisher's Permutation Test, 1975.
[2] Pedro M. Domingos, et al. Adversarial classification, 2004, KDD.
[3] Christopher Meek, et al. Adversarial learning, 2005, KDD.
[4] Yann LeCun, et al. The MNIST database of handwritten digits, 2005.
[5] Blaine Nelson, et al. Can machine learning be secure?, 2006, ASIACCS.
[6] Hans-Peter Kriegel, et al. Integrating structured biological data by Kernel Maximum Mean Discrepancy, 2006, ISMB.
[7] Li Fei-Fei, et al. ImageNet: A large-scale hierarchical image database, 2009, CVPR.
[8] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[9] Geoffrey E. Hinton, et al. Rectified Linear Units Improve Restricted Boltzmann Machines, 2010, ICML.
[10] Blaine Nelson, et al. The security of machine learning, 2010, Machine Learning.
[11] Bernhard Schölkopf, et al. A Kernel Two-Sample Test, 2012, J. Mach. Learn. Res.
[12] Fabio Roli, et al. Evasion Attacks against Machine Learning at Test Time, 2013, ECML/PKDD.
[13] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[14] Joan Bruna, et al. Intriguing properties of neural networks, 2014, ICLR.
[15] Thomas Brox, et al. Striving for Simplicity: The All Convolutional Net, 2015, ICLR.
[16] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[17] Luca Rigazio, et al. Towards Deep Neural Network Architectures Robust to Adversarial Examples, 2015, ICLR.
[18] Eugenio Culurciello, et al. Robust Convolutional Neural Networks under Adversarial Noise, 2015, arXiv.
[19] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2015, ICLR.
[20] Uri Shaham, et al. Understanding Adversarial Training: Increasing Local Stability of Neural Nets through Robust Optimization, 2015, arXiv.
[21] Dale Schuurmans, et al. Learning with a Strong Adversary, 2015, arXiv.
[22] Xin Zhang, et al. End to End Learning for Self-Driving Cars, 2016, arXiv.
[23] Sergey Ioffe, et al. Rethinking the Inception Architecture for Computer Vision, 2016, CVPR.
[24] Terrance E. Boult, et al. Adversarial Diversity and Hard Positive Generation, 2016, CVPR Workshops.
[25] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2016, CVPR.
[26] Seyed-Mohsen Moosavi-Dezfooli, et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2016, CVPR.
[27] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2016, IEEE S&P.
[28] Antonio Criminisi, et al. Measuring Neural Net Robustness with Constraints, 2016, NIPS.
[29] Patrick D. McDaniel, et al. Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples, 2016, arXiv.
[30] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2016, IEEE EuroS&P.
[31] Demis Hassabis, et al. Mastering the game of Go with deep neural networks and tree search, 2016, Nature.
[32] Yang Song, et al. Improving the Robustness of Deep Neural Networks via Stability Training, 2016, CVPR.
[33] George Kurian, et al. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation, 2016, arXiv.
[34] Patrick D. McDaniel, et al. On the (Statistical) Detection of Adversarial Examples, 2017, arXiv.
[35] Prateek Mittal, et al. Dimensionality Reduction as a Defense against Evasion Attacks on Machine Learning Classifiers, 2017, arXiv.
[36] Ryan R. Curtin, et al. Detecting Adversarial Samples from Artifacts, 2017, arXiv.
[37] Jan Hendrik Metzen, et al. On Detecting Adversarial Perturbations, 2017, ICLR.
[38] Xin Li, et al. Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics, 2017, ICCV.
[39] Kevin Gimpel, et al. Early Methods for Detecting Adversarial Images, 2017, ICLR.
[40] Zhitao Gong, et al. Adversarial and Clean Data Are Not Twins, 2017, aiDM@SIGMOD.
[41] Samy Bengio, et al. Adversarial examples in the physical world, 2017, ICLR.
[42] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2017, IEEE S&P.