A Survey on Security Threats and Defensive Techniques of Machine Learning: A Data Driven View
Qiang Liu | Pan Li | Wentao Zhao | Wei Cai | Shui Yu | Victor C. M. Leung
[1] Xue-wen Chen,et al. Big Data Deep Learning: Challenges and Perspectives , 2014, IEEE Access.
[2] Fabio Roli,et al. Yes, Machine Learning Can Be More Secure! A Case Study on Android Malware Detection , 2017, IEEE Transactions on Dependable and Secure Computing.
[3] Yizheng Chen,et al. Practical Attacks Against Graph-based Clustering , 2017, CCS.
[4] Marius Kloft,et al. A framework for quantitative security analysis of machine learning , 2009, AISec '09.
[5] David A. Wagner,et al. Towards Evaluating the Robustness of Neural Networks , 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[6] Dawn Xiaodong Song,et al. Delving into Transferable Adversarial Examples and Black-box Attacks , 2016, ICLR.
[7] Marius Kloft,et al. Security analysis of online centroid anomaly detection , 2010, J. Mach. Learn. Res.
[8] Fabio Roli,et al. Randomized Prediction Games for Adversarial Machine Learning , 2016, IEEE Transactions on Neural Networks and Learning Systems.
[9] Samy Bengio,et al. Adversarial examples in the physical world , 2016, ICLR.
[10] Brent Lagesse,et al. Analysis of Causative Attacks against SVMs Learning from Data Streams , 2017, IWSPA@CODASPY.
[11] George K. Karagiannidis,et al. Efficient Machine Learning for Big Data: A Review , 2015, Big Data Res.
[12] Wenjun Zeng,et al. Compressive sensing based secure multiparty privacy preserving framework for collaborative data-mining and signal processing , 2014, 2014 IEEE International Conference on Multimedia and Expo (ICME).
[13] Lujo Bauer,et al. Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition , 2016, CCS.
[14] Louis J. M. Aslett,et al. Encrypted statistical machine learning: new privacy preserving methods , 2015, ArXiv.
[15] Fabio Roli,et al. Security Evaluation of Support Vector Machines in Adversarial Environments , 2014, ArXiv.
[16] Patrick D. McDaniel,et al. Adversarial Perturbations Against Deep Neural Networks for Malware Classification , 2016, ArXiv.
[17] Ian Goodfellow,et al. Deep Learning with Differential Privacy , 2016, CCS.
[18] Fabio Roli,et al. Poisoning attacks to compromise face templates , 2013, 2013 International Conference on Biometrics (ICB).
[19] Micah Sherr,et al. Hidden Voice Commands , 2016, USENIX Security Symposium.
[20] Battista Biggio. Machine Learning under Attack: Vulnerability Exploitation and Security Measures , 2016, IH&MMSec.
[21] Patrick D. McDaniel,et al. Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples , 2016, ArXiv.
[22] Ivan Damgård,et al. Multiparty Computation from Somewhat Homomorphic Encryption , 2012, IACR Cryptol. ePrint Arch.
[23] Jan Hendrik Metzen,et al. On Detecting Adversarial Perturbations , 2017, ICLR.
[24] Ying Tan,et al. Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN , 2017, DMBD.
[25] Michael Naehrig,et al. CryptoNets: applying neural networks to encrypted data with high throughput and accuracy , 2016, ICML.
[26] Athanasios V. Vasilakos,et al. Machine learning on big data: Opportunities and challenges , 2017, Neurocomputing.
[27] Somesh Jha,et al. Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures , 2015, CCS.
[28] Fabio Roli,et al. Bagging Classifiers for Fighting Poisoning Attacks in Adversarial Classification Tasks , 2011, MCS.
[29] Daniel Cullina,et al. Enhancing robustness of machine learning systems via data transformations , 2017, 2018 52nd Annual Conference on Information Sciences and Systems (CISS).
[30] Christopher Meek,et al. Adversarial learning , 2005, KDD '05.
[31] Claudia Eckert,et al. Support vector machines under adversarial label contamination , 2015, Neurocomputing.
[32] Ananthram Swami,et al. Practical Black-Box Attacks against Machine Learning , 2016, AsiaCCS.
[33] Fabio Roli,et al. Pattern Recognition Systems under Attack: Design Issues and Research Challenges , 2014, Int. J. Pattern Recognit. Artif. Intell.
[34] Blaine Nelson,et al. Misleading Learners: Co-opting Your Spam Filter , 2009.
[35] Jonathon Shlens,et al. Explaining and Harnessing Adversarial Examples , 2014, ICLR.
[36] Pavel Laskov,et al. Detection of Malicious PDF Files Based on Hierarchical Document Structure , 2013, NDSS.
[37] Miriam A. M. Capretz,et al. Machine Learning With Big Data: Challenges and Approaches , 2017, IEEE Access.
[38] Bo An,et al. Efficient Label Contamination Attacks Against Black-Box Learning Models , 2017, IJCAI.
[39] Patrick P. K. Chan,et al. Adversarial Feature Selection Against Evasion Attacks , 2016, IEEE Transactions on Cybernetics.
[40] Sailik Sengupta,et al. MTDeep: Boosting the Security of Deep Neural Nets Against Adversarial Attacks with Moving Target Defense , 2017, AAAI Workshops.
[41] Shyhtsun Felix Wu,et al. On Attacking Statistical Spam Filters , 2004, CEAS.
[42] Blaine Nelson,et al. Can machine learning be secure? , 2006, ASIACCS '06.
[43] Song Guo,et al. Malware Propagation in Large-Scale Networks , 2015, IEEE Transactions on Knowledge and Data Engineering.
[44] Jason Yosinski,et al. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images , 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[45] Fan Zhang,et al. Stealing Machine Learning Models via Prediction APIs , 2016, USENIX Security Symposium.
[46] Xiaojin Zhu,et al. The Security of Latent Dirichlet Allocation , 2015, AISTATS.
[47] Ling Huang,et al. Learning in a Large Function Space: Privacy-Preserving Mechanisms for SVM Learning , 2009, J. Priv. Confidentiality.
[48] Michael P. Wellman,et al. SoK: Security and Privacy in Machine Learning , 2018, 2018 IEEE European Symposium on Security and Privacy (EuroS&P).
[49] Tobias Scheffer,et al. Stackelberg games for adversarial prediction problems , 2011, KDD.
[50] Ryan R. Curtin,et al. Detecting Adversarial Samples from Artifacts , 2017, ArXiv.
[51] Amir Globerson,et al. Nightmare at test time: robust learning by feature deletion , 2006, ICML.
[52] Dit-Yan Yeung,et al. Towards Bayesian Deep Learning: A Framework and Some Existing Methods , 2016, IEEE Transactions on Knowledge and Data Engineering.
[53] Roman Garnett,et al. Differentially Private Bayesian Optimization , 2015, ICML.
[54] David A. Wagner,et al. Defensive Distillation is Not Robust to Adversarial Examples , 2016, ArXiv.
[55] Lior Rokach,et al. Generic Black-Box End-to-End Attack Against State of the Art API Call Based Malware Classifiers , 2017, RAID.
[56] Malcolm I. Heywood,et al. Automatically Evading IDS Using GP Authored Attacks , 2007, 2007 IEEE Symposium on Computational Intelligence in Security and Defense Applications.
[57] Nina Narodytska,et al. Simple Black-Box Adversarial Attacks on Deep Neural Networks , 2017, 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[58] Dan Boneh,et al. Ensemble Adversarial Training: Attacks and Defenses , 2017, ICLR.
[59] Pedro M. Domingos,et al. Adversarial classification , 2004, KDD.
[60] Somesh Jha,et al. Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing , 2014, USENIX Security Symposium.
[61] Patrick D. McDaniel,et al. On the (Statistical) Detection of Adversarial Examples , 2017, ArXiv.
[62] Claudia Eckert,et al. Is Feature Selection Secure against Training Data Poisoning? , 2015, ICML.
[63] Alexander J. Smola,et al. Convex Learning with Invariances , 2007, NIPS.
[64] Dan Boneh,et al. The Space of Transferable Adversarial Examples , 2017, ArXiv.
[65] Fabio Roli,et al. Poisoning Adaptive Biometric Systems , 2012, SSPR/SPR.
[66] Seyed-Mohsen Moosavi-Dezfooli,et al. Universal Adversarial Perturbations , 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[67] Ananthram Swami,et al. The Limitations of Deep Learning in Adversarial Settings , 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[68] Luca Rigazio,et al. Towards Deep Neural Network Architectures Robust to Adversarial Examples , 2014, ICLR.
[69] Tobias Scheffer,et al. Static prediction games for adversarial learning problems , 2012, J. Mach. Learn. Res.
[70] Jeffrey F. Naughton,et al. A Methodology for Formalizing Model-Inversion Attacks , 2016, 2016 IEEE 29th Computer Security Foundations Symposium (CSF).
[71] R. Venkatesh Babu,et al. Fast Feature Fool: A data independent approach to universal adversarial perturbations , 2017, BMVC.
[72] Fabio Roli,et al. Poisoning behavioral malware clustering , 2014, AISec '14.
[73] Susmita Sur-Kolay,et al. Systematic Poisoning Attacks on and Defenses for Machine Learning in Healthcare , 2015, IEEE Journal of Biomedical and Health Informatics.
[74] Michael P. Wellman,et al. Towards the Science of Security and Privacy in Machine Learning , 2016, ArXiv.
[75] Ananthram Swami,et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks , 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[76] Yanjun Qi,et al. Automatically Evading Classifiers: A Case Study on PDF Malware Classifiers , 2016, NDSS.
[77] Xinzhong Zhu,et al. Super-class Discriminant Analysis: A novel solution for heteroscedasticity , 2013, Pattern Recognit. Lett.
[78] Blaine Nelson,et al. Poisoning Attacks against Support Vector Machines , 2012, ICML.
[79] Christopher Meek,et al. Good Word Attacks on Statistical Spam Filters , 2005, CEAS.
[80] Fabio Roli,et al. Is data clustering in adversarial settings secure? , 2013, AISec.
[81] Giorgio Giacinto,et al. Looking at the bag is not enough to find the bomb: an evasion of structural methods for malicious PDF files detection , 2013, ASIA CCS '13.
[82] Fabio Roli,et al. Poisoning Complete-Linkage Hierarchical Clustering , 2014, S+SSPR.
[83] Fabio Roli,et al. Security Evaluation of Pattern Classifiers under Attack , 2014, IEEE Transactions on Knowledge and Data Engineering.
[84] Blaine Nelson,et al. The security of machine learning , 2010, Machine Learning.
[85] Jianping Yin,et al. Sampling Attack against Active Learning in Adversarial Environment , 2012, MDAI.
[86] Yevgeniy Vorobeychik,et al. Feature Cross-Substitution in Adversarial Classification , 2014, NIPS.
[87] Yanjun Qi,et al. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks , 2017, NDSS.
[88] Ricky Laishram,et al. Curie: A method for protecting SVM Classifier from Poisoning Attack , 2016, ArXiv.
[89] Yiran Chen,et al. Generative Poisoning Attack Method Against Neural Networks , 2017, ArXiv.
[90] Christian Gagné,et al. Robustness to Adversarial Examples through an Ensemble of Specialists , 2017, ICLR.
[91] Paul Barford,et al. Data Poisoning Attacks against Autoregressive Models , 2016, AAAI.
[92] Yevgeniy Vorobeychik,et al. Data Poisoning Attacks on Factorization-Based Collaborative Filtering , 2016, NIPS.
[93] Joan Bruna,et al. Intriguing properties of neural networks , 2013, ICLR.
[94] Shui Yu,et al. Big Privacy: Challenges and Opportunities of Privacy Study in the Age of Big Data , 2016, IEEE Access.
[95] Ling Huang,et al. ANTIDOTE: understanding and defending against poisoning of anomaly detectors , 2009, IMC '09.
[96] Pavel Laskov,et al. Practical Evasion of a Learning-Based Classifier: A Case Study , 2014, 2014 IEEE Symposium on Security and Privacy.
[97] Stefano Rizzi,et al. What-If Analysis , 2018, Encyclopedia of Database Systems.
[98] Cynthia Dwork,et al. Differential Privacy , 2006, ICALP.
[99] Fabio Roli,et al. Adversarial attacks against intrusion detection systems: Taxonomy, solutions and open issues , 2013, Inf. Sci.
[100] Vitaly Shmatikov,et al. Membership Inference Attacks Against Machine Learning Models , 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[101] Seyed-Mohsen Moosavi-Dezfooli,et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[102] Úlfar Erlingsson,et al. RAPPOR: Randomized Aggregatable Privacy-Preserving Ordinal Response , 2014, CCS.
[103] Fabio Roli,et al. Multiple classifier systems for robust classifier design in adversarial environments , 2010, Int. J. Mach. Learn. Cybern.
[104] Pascal Frossard,et al. Analysis of classifiers’ robustness to adversarial perturbations , 2015, Machine Learning.
[105] Fabio Roli,et al. Evasion Attacks against Machine Learning at Test Time , 2013, ECML/PKDD.
[106] Blaine Nelson,et al. Adversarial machine learning , 2011, AISec '11.