Using Intuition from Empirical Properties to Simplify Adversarial Training Defense
[1] Harini Kannan, et al. Adversarial Logit Pairing, 2018, ArXiv.
[2] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[3] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[4] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[5] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[6] Zheng Zhang, et al. MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems, 2015, ArXiv.
[7] Joan Bruna, et al. Intriguing Properties of Neural Networks, 2013, ICLR.
[8] Samy Bengio, et al. Adversarial Machine Learning at Scale, 2016, ICLR.
[9] Sergei Izrailev, et al. Machine Learning at Scale, 2014, ArXiv.
[10] Geoffrey E. Hinton, et al. Deep Learning, 2015, Nature.
[11] Samy Bengio, et al. Adversarial Examples in the Physical World, 2016, ICLR.
[12] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[13] Kun He, et al. Improving the Generalization of Adversarial Training with Domain Adaptation, 2018, ICLR.
[14] Rama Chellappa, et al. Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models, 2018, ICLR.
[15] Hao Chen, et al. MagNet: A Two-Pronged Defense against Adversarial Examples, 2017, CCS.