A taxonomy and terminology of adversarial machine learning
Elham Tabassi | Michael Hadjimichael | Kevin J. Burns | Andres D. Molina-Markham | Julian T. Sexton