Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification
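The title names a region-based classification defense: instead of predicting the label of a single input point, the classifier predicts labels for many points sampled from a small hypercube around the input and takes a majority vote, the intuition being that adversarial examples sit close to the decision boundary, so most of the surrounding region still falls in the true class. Below is a minimal NumPy sketch of that idea; the radius `r`, sample count `n`, and the `point_classifier` callable are illustrative assumptions, not names taken from the paper.

```python
import numpy as np

def region_based_classify(point_classifier, x, r=0.3, n=1000, seed=0):
    """Majority-vote label over n points drawn uniformly from the
    hypercube [x - r, x + r]^d, clipped to the valid input range [0, 1].

    point_classifier: callable mapping a batch of inputs of shape
    (n, *x.shape) to an array of n integer labels (an assumed interface).
    """
    rng = np.random.default_rng(seed)
    # Draw n perturbed copies of x inside the hypercube around x.
    samples = x[None, ...] + rng.uniform(-r, r, size=(n,) + x.shape)
    samples = np.clip(samples, 0.0, 1.0)
    labels = np.asarray(point_classifier(samples))
    # The region-based prediction is the most frequent point label.
    values, counts = np.unique(labels, return_counts=True)
    return values[np.argmax(counts)]
```

For a clean input far from any decision boundary, the vote simply reproduces the point classifier's prediction; the aggregation only changes the output near boundaries, which is where evasion attacks place their perturbed examples.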
[1] Blaine Nelson, et al. Exploiting Machine Learning to Subvert Your Spam Filter, 2008, LEET.
[2] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[3] Jeffrey Dean, et al. Efficient Estimation of Word Representations in Vector Space, 2013, ICLR.
[4] Samy Bengio, et al. Adversarial Machine Learning at Scale, 2016, ICLR.
[5] David Wagner, et al. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods, 2017, AISec@CCS.
[6] Tara N. Sainath, et al. Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups, 2012, IEEE Signal Processing Magazine.
[7] Yanjun Qi, et al. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks, 2017, NDSS.
[8] Ying Cai, et al. Fake Co-visitation Injection Attacks to Recommender Systems, 2017, NDSS.
[9] Xin Li, et al. Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics, 2016, 2017 IEEE International Conference on Computer Vision (ICCV).
[10] Blaine Nelson, et al. Poisoning Attacks against Support Vector Machines, 2012, ICML.
[11] Daniel Kifer, et al. Unifying Adversarial Training Algorithms with Flexible Deep Data Gradient Regularization, 2016, ArXiv.
[12] Dan Boneh, et al. Ensemble Adversarial Training: Attacks and Defenses, 2017, ICLR.
[13] Seyed-Mohsen Moosavi-Dezfooli, et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[14] Zhitao Gong, et al. Adversarial and Clean Data Are Not Twins, 2017, aiDM@SIGMOD.
[15] Jan Hendrik Metzen, et al. On Detecting Adversarial Perturbations, 2017, ICLR.
[16] Prateek Saxena, et al. Auror: defending against poisoning attacks in collaborative deep learning systems, 2016, ACSAC.
[17] Demis Hassabis, et al. Mastering the game of Go with deep neural networks and tree search, 2016, Nature.
[18] Hao Chen, et al. MagNet: A Two-Pronged Defense against Adversarial Examples, 2017, CCS.
[19] Dawn Xiaodong Song, et al. Delving into Transferable Adversarial Examples and Black-box Attacks, 2016, ICLR.
[20] Yevgeniy Vorobeychik, et al. Data Poisoning Attacks on Factorization-Based Collaborative Filtering, 2016, NIPS.
[21] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[22] Wenbo Guo, et al. Adversary Resistant Deep Neural Networks with an Application to Malware Detection, 2016, KDD.
[23] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[24] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[25] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[26] Blaine Nelson, et al. Adversarial machine learning, 2011, AISec '11.
[27] John C. Duchi, et al. Certifiable Distributional Robustness with Principled Adversarial Training, 2017, ArXiv.
[28] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[29] John C. Duchi, et al. Certifying Some Distributional Robustness with Principled Adversarial Training, 2017, ICLR.
[30] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[31] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[32] Luca Rigazio, et al. Towards Deep Neural Network Architectures Robust to Adversarial Examples, 2014, ICLR.
[33] Ananthram Swami, et al. Practical Black-Box Attacks against Machine Learning, 2016, AsiaCCS.
[34] Patrick D. McDaniel, et al. On the (Statistical) Detection of Adversarial Examples, 2017, ArXiv.
[35] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[36] Samy Bengio, et al. Adversarial examples in the physical world, 2016, ICLR.
[37] Kevin Gimpel, et al. Early Methods for Detecting Adversarial Images, 2016, ICLR.
[38] Hariharan Narayanan, et al. Sample Complexity of Testing the Manifold Hypothesis, 2010, NIPS.
[39] Ryan R. Curtin, et al. Detecting Adversarial Samples from Artifacts, 2017, ArXiv.