Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning
Matthew Jagielski | Alina Oprea | Battista Biggio | Chang Liu | Cristina Nita-Rotaru | Bo Li
[1] Ming Li, et al. Learning in the presence of malicious errors, 1988, STOC '88.
[2] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2016, IEEE Symposium on Security and Privacy (SP).
[3] Robert C. Bolles, et al. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography, 1981, CACM.
[4] Vitaly Shmatikov, et al. Membership Inference Attacks Against Machine Learning Models, 2017, IEEE Symposium on Security and Privacy (SP).
[5] Blaine Nelson, et al. Exploiting Machine Learning to Subvert Your Spam Filter, 2008, LEET.
[6] Chang Liu, et al. Robust Linear Regression Against Training Data Poisoning, 2017, AISec@CCS.
[7] Yi Ma, et al. Robust principal component analysis?, 2009, JACM.
[8] Somesh Jha, et al. Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing, 2014, USENIX Security Symposium.
[9] Cristina Nita-Rotaru, et al. On the Practicality of Integrity Attacks on Document-Level Sentiment Analysis, 2014, AISec '14.
[10] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2016, IEEE European Symposium on Security and Privacy (EuroS&P).
[11] Fabio Roli, et al. Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization, 2017, AISec@CCS.
[12] Susmita Sur-Kolay, et al. Systematic Poisoning Attacks on and Defenses for Machine Learning in Healthcare, 2015, IEEE Journal of Biomedical and Health Informatics.
[13] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2017, IEEE Symposium on Security and Privacy (SP).
[14] Marius Kloft, et al. Security analysis of online centroid anomaly detection, 2010, J. Mach. Learn. Res.
[15] Claudia Eckert, et al. Is Feature Selection Secure against Training Data Poisoning?, 2015, ICML.
[16] Fabio Roli, et al. Evasion Attacks against Machine Learning at Test Time, 2013, ECML/PKDD.
[17] Dawn Xiaodong Song, et al. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning, 2017, ArXiv.
[18] Shie Mannor, et al. Robust Sparse Regression under Adversarial Corruption, 2013, ICML.
[19] Frederick R. Forst, et al. On robust estimation of the location parameter, 1980.
[20] Gang Wang, et al. Man vs. Machine: Practical Adversarial Detection of Malicious Crowdsourcing Workers, 2014, USENIX Security Symposium.
[21] Blaine Nelson, et al. Can machine learning be secure?, 2006, ASIACCS '06.
[22] Fabio Roli, et al. Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning, 2018, CCS.
[23] Shie Mannor, et al. Robust High Dimensional Sparse Regression and Matching Pursuit, 2013, ArXiv.
[24] Xiaojin Zhu, et al. The Security of Latent Dirichlet Allocation, 2015, AISTATS.
[25] Shie Mannor, et al. Robust Regression and Lasso, 2008, IEEE Transactions on Information Theory.
[26] B. Ripley, et al. Robust Statistics, 2018, Wiley Series in Probability and Statistics.
[27] Fabio Roli, et al. Security Evaluation of Pattern Classifiers under Attack, 2014, ArXiv.
[28] Blaine Nelson, et al. Poisoning Attacks against Support Vector Machines, 2012, ICML.
[29] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[30] Robert Tibshirani, et al. The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd Edition, 2001, Springer Series in Statistics.
[31] Salvatore J. Stolfo, et al. Casting out Demons: Sanitizing Training Data for Anomaly Sensors, 2008, IEEE Symposium on Security and Privacy (SP 2008).
[32] Shie Mannor, et al. Robust Logistic Regression and Classification, 2014, NIPS.
[33] Blaine Nelson, et al. Adversarial machine learning, 2011, AISec '11.
[34] Brendan Dolan-Gavitt, et al. BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain, 2017, ArXiv.
[35] Christopher Meek, et al. Adversarial learning, 2005, KDD '05.
[36] Dawn Xiaodong Song, et al. Limits of Learning-based Signature Generation with Adversaries, 2008, NDSS.
[37] James Newsome, et al. Paragraph: Thwarting Signature Learning by Training Maliciously, 2006, RAID.
[38] Nick Feamster, et al. PREDATOR: Proactive Recognition and Elimination of Domain Abuse at Time-Of-Registration, 2016, CCS.
[39] Somesh Jha, et al. Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures, 2015, CCS.
[40] Wenke Lee, et al. Misleading worm signature generators using deliberate noise injection, 2006, IEEE Symposium on Security and Privacy (S&P'06).
[41] David E. Tyler. Robust Statistics: Theory and Methods, 2008.
[42] Pedro M. Domingos, et al. Adversarial classification, 2004, KDD.
[43] Paul Barford, et al. Data Poisoning Attacks against Autoregressive Models, 2016, AAAI.
[44] Yevgeniy Vorobeychik, et al. Data Poisoning Attacks on Factorization-Based Collaborative Filtering, 2016, NIPS.
[45] Xiaojin Zhu, et al. Using Machine Teaching to Identify Optimal Training-Set Attacks on Machine Learners, 2015, AAAI.
[46] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[47] Ling Huang, et al. ANTIDOTE: understanding and defending against poisoning of anomaly detectors, 2009, IMC '09.
[48] Pavel Laskov, et al. Practical Evasion of a Learning-Based Classifier: A Case Study, 2014, IEEE Symposium on Security and Privacy.