[1] Philip H. S. Torr, et al. With Friends Like These, Who Needs Adversaries?, 2018, NeurIPS.
[2] David J. Fleet, et al. Adversarial Manipulation of Deep Representations, 2015, ICLR.
[3] Hao Chen, et al. MagNet: A Two-Pronged Defense against Adversarial Examples, 2017, CCS.
[4] Alexandros G. Dimakis, et al. The Robust Manifold Defense: Adversarial Training using Generative Models, 2017, ArXiv.
[5] Geoffrey E. Hinton, et al. Matrix capsules with EM routing, 2018, ICLR.
[6] Matthias Bethge, et al. Robust Perception through Analysis by Synthesis, 2018, ArXiv.
[7] David Wagner, et al. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods, 2017, AISec@CCS.
[8] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[9] Geoffrey E. Hinton, et al. Dynamic Routing Between Capsules, 2017, NIPS.
[10] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[11] Yao Zhao, et al. Adversarial Attacks and Defences Competition, 2018, ArXiv.
[12] Roland Vollgraf, et al. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms, 2017, ArXiv.
[13] Samy Bengio, et al. Adversarial examples in the physical world, 2016, ICLR.
[14] Martin Wattenberg, et al. Adversarial Spheres, 2018, ICLR.
[15] Matthias Bethge, et al. Towards the first adversarially robust neural network model on MNIST, 2018, ICLR.
[16] Rama Chellappa, et al. Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models, 2018, ICLR.
[17] Andrew Y. Ng, et al. Reading Digits in Natural Images with Unsupervised Feature Learning, 2011.
[18] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[19] Fabio Roli, et al. Evasion Attacks against Machine Learning at Test Time, 2013, ECML/PKDD.