Marco Melis | Battista Biggio | Ambra Demontis | Maura Pintor | Angelo Sotgiu
[1] Matthias Bethge, et al. Foolbox v0.8.0: A Python toolbox to benchmark the robustness of machine learning models, 2017, ArXiv.
[2] Ian J. Goodfellow, et al. Technical Report on the CleverHans v2.1.0 Adversarial Examples Library, 2016, ArXiv.
[3] Martin Wistuba, et al. Adversarial Robustness Toolbox v1.0.0, 2018, ArXiv.
[4] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2017 IEEE Symposium on Security and Privacy (SP).
[5] Percy Liang, et al. Understanding Black-box Predictions via Influence Functions, 2017, ICML.
[6] J. Doug Tygar, et al. Adversarial machine learning, 2011, AISec '11.
[7] Fabio Roli, et al. Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning, 2018, CCS.
[8] Blaine Nelson, et al. Poisoning Attacks against Support Vector Machines, 2012, ICML.
[9] W. Brendel, et al. Foolbox: A Python toolbox to benchmark the robustness of machine learning models, 2017, ArXiv.
[10] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[11] Joan Bruna, et al. Intriguing properties of neural networks, 2014, ICLR.
[12] Ankur Taly, et al. Axiomatic Attribution for Deep Networks, 2017, ICML.
[13] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, KDD.
[14] Fabio Roli, et al. Evasion Attacks against Machine Learning at Test Time, 2013, ECML/PKDD.
[15] Gavin Brown, et al. Is Deep Learning Safe for Robot Vision? Adversarial Examples Against the iCub Humanoid, 2017 IEEE International Conference on Computer Vision Workshops (ICCVW).
[16] Fabio Roli, et al. Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks, 2019, USENIX Security Symposium.