Learner-Independent Targeted Data Omission Attacks
[1] Fabio Roli,et al. Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning , 2017, Pattern Recognit.
[2] Blaine Nelson,et al. Support Vector Machines Under Adversarial Label Noise , 2011, ACML.
[3] Justin Hsu,et al. Data Poisoning against Differentially-Private Learners: Attacks and Defenses , 2019, IJCAI.
[4] J. Doug Tygar,et al. Adversarial machine learning , 2011, AISec '11.
[5] Joan Bruna,et al. Intriguing properties of neural networks , 2013, ICLR.
[6] Ling Huang,et al. ANTIDOTE: understanding and defending against poisoning of anomaly detectors , 2009, IMC '09.
[7] Tudor Dumitras,et al. When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks , 2018, USENIX Security Symposium.
[8] Wenke Lee,et al. Polymorphic Blending Attacks , 2006, USENIX Security Symposium.
[9] Wei Cai,et al. A Survey on Security Threats and Defensive Techniques of Machine Learning: A Data Driven View , 2018, IEEE Access.
[10] Ananthram Swami,et al. The Limitations of Deep Learning in Adversarial Settings , 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[11] Patrice Y. Simard,et al. Best practices for convolutional neural networks applied to visual document analysis , 2003, Seventh International Conference on Document Analysis and Recognition, 2003. Proceedings.
[12] Claudia Eckert,et al. Support vector machines under adversarial label contamination , 2015, Neurocomputing.
[13] Dit-Yan Yeung,et al. Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting , 2015, NIPS.
[14] Patrick D. McDaniel,et al. Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples , 2016, ArXiv.
[15] Salvatore J. Stolfo,et al. Casting out Demons: Sanitizing Training Data for Anomaly Sensors , 2008, 2008 IEEE Symposium on Security and Privacy (sp 2008).
[16] N. Altman. An Introduction to Kernel and Nearest-Neighbor Nonparametric Regression , 1992.
[17] Chang Liu,et al. Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning , 2018, 2018 IEEE Symposium on Security and Privacy (SP).
[18] Percy Liang,et al. Certified Defenses for Data Poisoning Attacks , 2017, NIPS.
[19] Blaine Nelson,et al. The security of machine learning , 2010, Machine Learning.
[20] Yevgeniy Vorobeychik,et al. Feature Cross-Substitution in Adversarial Classification , 2014, NIPS.
[21] Blaine Nelson,et al. Can machine learning be secure? , 2006, ASIACCS '06.
[22] Blaine Nelson,et al. Exploiting Machine Learning to Subvert Your Spam Filter , 2008, LEET.
[23] Lawrence Carin,et al. Joint dictionary learning and topic modeling for image clustering , 2011, 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[24] Susmita Sur-Kolay,et al. Systematic Poisoning Attacks on and Defenses for Machine Learning in Healthcare , 2015, IEEE Journal of Biomedical and Health Informatics.