Hang Su | Zihao Xiao | Yinpeng Dong | Jun Zhu | Qi-An Fu | Tianyu Pang | Xiao Yang
[1] Harini Kannan, et al. Adversarial Logit Pairing, 2018, ArXiv.
[2] Aleksander Madry, et al. On Evaluating Adversarial Robustness, 2019, ArXiv.
[3] Matthias Bethge, et al. Foolbox v0.8.0: A Python toolbox to benchmark the robustness of machine learning models, 2017, ArXiv.
[4] Ning Chen, et al. Improving Adversarial Robustness via Promoting Ensemble Diversity, 2019, ICML.
[5] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[6] Aditi Raghunathan, et al. Semidefinite relaxations for certifying robustness to adversarial examples, 2018, NeurIPS.
[7] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[8] Matthias Bethge, et al. Adversarial Vision Challenge, 2018, The NeurIPS '18 Competition.
[9] Dawn Xiaodong Song, et al. Decision Boundary Analysis of Adversarial Examples, 2018, ICLR.
[10] Nicholas Carlini. A critique of the DeepSec Platform for Security Analysis of Deep Learning Models, 2019, ArXiv.
[11] J. Zico Kolter, et al. Scaling provable adversarial defenses, 2018, NeurIPS.
[12] Martin Wistuba, et al. Adversarial Robustness Toolbox v1.0.0, 2018, ArXiv:1807.01069.
[13] Ian J. Goodfellow, et al. Technical Report on the CleverHans v2.1.0 Adversarial Examples Library, 2016, ArXiv.
[14] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2017, IEEE Symposium on Security and Privacy (SP).
[15] Cho-Jui Hsieh, et al. Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network, 2018, ICLR.
[16] Matthias Bethge, et al. Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models, 2017, ICLR.
[17] Zoubin Ghahramani, et al. A study of the effect of JPG compression on adversarial images, 2016, ArXiv.
[18] Jun Zhu, et al. Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks, 2019, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[19] Rama Chellappa, et al. Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models, 2018, ICLR.
[20] Ananthram Swami, et al. Practical Black-Box Attacks against Machine Learning, 2016, AsiaCCS.
[21] Yang Song, et al. Constructing Unrestricted Adversarial Examples with Generative Models, 2018, NeurIPS.
[22] Yao Zhao, et al. Adversarial Attacks and Defences Competition, 2018, ArXiv.