[1] Percy Liang, et al. Certified Defenses for Data Poisoning Attacks, 2017, NIPS.
[2] Martín Abadi, et al. Adversarial Patch, 2017, arXiv.
[3] Yevgeniy Vorobeychik, et al. Feature Cross-Substitution in Adversarial Classification, 2014, NIPS.
[4] Ling Huang, et al. ANTIDOTE: Understanding and Defending Against Poisoning of Anomaly Detectors, 2009, IMC '09.
[5] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[6] Micah Goldblum, et al. Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks, 2020, ICML.
[7] Wei Cai, et al. A Survey on Security Threats and Defensive Techniques of Machine Learning: A Data-Driven View, 2018, IEEE Access.
[8] N. Altman. An Introduction to Kernel and Nearest-Neighbor Nonparametric Regression, 1992.
[9] Blaine Nelson, et al. Support Vector Machines Under Adversarial Label Noise, 2011, ACML.
[10] Shai Ben-David, et al. Understanding Machine Learning: From Theory to Algorithms, 2014.
[11] Tudor Dumitras, et al. When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks, 2018, USENIX Security Symposium.
[12] Debdeep Mukhopadhyay, et al. Adversarial Attacks and Defences: A Survey, 2018, arXiv.
[13] Susmita Sur-Kolay, et al. Systematic Poisoning Attacks on and Defenses for Machine Learning in Healthcare, 2015, IEEE Journal of Biomedical and Health Informatics.
[14] Patrick D. McDaniel, et al. Transferability in Machine Learning: From Phenomena to Black-Box Attacks Using Adversarial Samples, 2016, arXiv.
[15] J. Doug Tygar, et al. Adversarial Machine Learning, 2011, AISec '11.
[16] Blaine Nelson, et al. Exploiting Machine Learning to Subvert Your Spam Filter, 2008, LEET.
[17] Przemyslaw Klesk, et al. Sets of Approximating Functions with Finite Vapnik-Chervonenkis Dimension for Nearest-Neighbors Algorithms, 2011, Pattern Recognition Letters.
[18] José Camacho-Collados, et al. On the Role of Text Preprocessing in Neural Network Architectures: An Evaluation Study on Text Categorization and Sentiment Analysis, 2017, BlackboxNLP@EMNLP.
[19] Justin Hsu, et al. Data Poisoning against Differentially-Private Learners: Attacks and Defenses, 2019, IJCAI.
[20] Leslie G. Valiant. A Theory of the Learnable, 1984, STOC '84.
[21] Chang Liu, et al. Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning, 2018, 2018 IEEE Symposium on Security and Privacy (SP).
[22] Onn Shehory, et al. Learner-Independent Targeted Data Omission Attacks, 2020.
[23] Sébastien Marcel, et al. Torchvision: The Machine-Vision Package of Torch, 2010, ACM Multimedia.
[24] Joan Bruna, et al. Intriguing Properties of Neural Networks, 2013, ICLR.