Efficient Defenses Against Adversarial Attacks
Valentina Zantedeschi | Maria-Irina Nicolae | Ambrish Rawat
[1] Shan Sung Liew et al. Bounded activation functions for enhanced training stability of deep neural networks on visual pattern recognition problems, 2016, Neurocomputing.
[2] Seyed-Mohsen Moosavi-Dezfooli et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2016, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[3] Patrick D. McDaniel et al. On the Effectiveness of Defensive Distillation, 2016, ArXiv.
[4] Pedro M. Domingos et al. Adversarial classification, 2004, KDD.
[5] Samy Bengio et al. Adversarial examples in the physical world, 2016, ICLR.
[6] Fabio Roli et al. Evasion Attacks against Machine Learning at Test Time, 2013, ECML/PKDD.
[7] David Wagner et al. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods, 2017, AISec@CCS.
[8] Ryan R. Curtin et al. Detecting Adversarial Samples from Artifacts, 2017, ArXiv.
[9] Yanjun Qi et al. Feature Squeezing Mitigates and Detects Carlini/Wagner Adversarial Examples, 2017, ArXiv.
[10] Patrick D. McDaniel et al. On the (Statistical) Detection of Adversarial Examples, 2017, ArXiv.
[11] Jason Yosinski et al. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images, 2015, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[12] Michael A. Arbib et al. The handbook of brain theory and neural networks, 1995, A Bradford book.
[13] Blaine Nelson et al. Can machine learning be secure?, 2006, ASIACCS '06.
[14] Terrance E. Boult et al. Towards Open Set Deep Networks, 2016, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[15] Seyed-Mohsen Moosavi-Dezfooli et al. Universal Adversarial Perturbations, 2017, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[16] Ananthram Swami et al. The Limitations of Deep Learning in Adversarial Settings, 2016, IEEE European Symposium on Security and Privacy (EuroS&P).
[17] Min Wu et al. Safety Verification of Deep Neural Networks, 2016, CAV.
[18] David A. Wagner et al. Towards Evaluating the Robustness of Neural Networks, 2017, IEEE Symposium on Security and Privacy (SP).
[19] Ming Li et al. Learning in the presence of malicious errors, 1988, STOC '88.
[20] Fabio Roli et al. Security Evaluation of Pattern Classifiers under Attack, 2014, IEEE Transactions on Knowledge and Data Engineering.
[21] David A. Wagner et al. Defensive Distillation is Not Robust to Adversarial Examples, 2016, ArXiv.
[22] Shin Ishii et al. Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning, 2017, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[23] Guigang Zhang et al. Deep Learning, 2016, Int. J. Semantic Comput.
[24] Nitish Srivastava et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[25] Mohamed Khalil-Hani et al. Bounded activation functions for enhanced training stability of deep neural networks on visual pattern recognition problems, 2016.
[26] Yoshua Bengio et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[27] Zhitao Gong et al. Adversarial and Clean Data Are Not Twins, 2017, aiDM@SIGMOD.
[28] Yoshua Bengio et al. Deep Sparse Rectifier Neural Networks, 2011, AISTATS.
[29] Jan Hendrik Metzen et al. On Detecting Adversarial Perturbations, 2017, ICLR.
[30] Michael I. Jordan et al. The Handbook of Brain Theory and Neural Networks, 2002.
[31] Dan Boneh et al. Ensemble Adversarial Training: Attacks and Defenses, 2017, ICLR.
[32] Joan Bruna et al. Intriguing properties of neural networks, 2013, ICLR.
[33] Samy Bengio et al. The Handbook of Brain Theory and Neural Networks, 2002.
[34] Simon Haykin et al. Gradient-Based Learning Applied to Document Recognition, 2001.
[35] Yanjun Qi et al. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks, 2017, NDSS.
[36] Ananthram Swami et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2016, IEEE Symposium on Security and Privacy (SP).
[37] Pascal Frossard et al. Analysis of universal adversarial perturbations, 2017, ArXiv.
[38] Aleksander Madry et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[39] Jonathon Shlens et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[40] Yoshua Bengio et al. A Closer Look at Memorization in Deep Networks, 2017, ICML.
[41] Jian Sun et al. Deep Residual Learning for Image Recognition, 2016, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[42] Samy Bengio et al. Understanding deep learning requires rethinking generalization, 2016, ICLR.
[43] Christopher Meek et al. Adversarial learning, 2005, KDD '05.
[44] Ananthram Swami et al. Practical Black-Box Attacks against Machine Learning, 2016, AsiaCCS.