暂无分享,去创建一个
[1] Aleksander Madry,et al. On Evaluating Adversarial Robustness , 2019, ArXiv.
[2] David A. Wagner,et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples , 2018, ICML.
[3] Ian J. Goodfellow,et al. Technical Report on the CleverHans v2.1.0 Adversarial Examples Library , 2016 .
[4] Joan Bruna,et al. Intriguing properties of neural networks , 2013, ICLR.
[5] David Wagner,et al. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods , 2017, AISec@CCS.
[6] Samy Bengio,et al. Adversarial examples in the physical world , 2016, ICLR.
[7] Ting Wang,et al. DEEPSEC: A Uniform Platform for Security Analysis of Deep Learning Model , 2019, 2019 IEEE Symposium on Security and Privacy (SP).
[8] Ananthram Swami,et al. The Limitations of Deep Learning in Adversarial Settings , 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[9] Xiangyu Zhang,et al. Attacks Meet Interpretability: Attribute-steered Detection of Adversarial Samples , 2018, NeurIPS.
[10] Aleksander Madry,et al. Towards Deep Learning Models Resistant to Adversarial Attacks , 2017, ICLR.
[11] Jonathon Shlens,et al. Explaining and Harnessing Adversarial Examples , 2014, ICLR.