The Security of Machine Learning Systems
[1] Justin Domke, et al. Generic Methods for Optimization-Based Modeling, 2012, AISTATS.
[2] Ananthram Swami, et al. Practical Black-Box Attacks against Machine Learning, 2016, AsiaCCS.
[3] Dawn Song, et al. Robust Physical-World Attacks on Deep Learning Models, 2017, ArXiv abs/1707.08945.
[4] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2016, IEEE European Symposium on Security and Privacy (EuroS&P).
[5] Claudia Eckert, et al. Is Feature Selection Secure against Training Data Poisoning?, 2015, ICML.
[6] Ryan P. Adams, et al. Gradient-based Hyperparameter Optimization through Reversible Learning, 2015, ICML.
[7] Xiaojin Zhu, et al. Using Machine Teaching to Identify Optimal Training-Set Attacks on Machine Learners, 2015, AAAI.
[8] Blaine Nelson, et al. Can machine learning be secure?, 2006, AsiaCCS '06.
[9] Gavin Brown, et al. Is Deep Learning Safe for Robot Vision? Adversarial Examples Against the iCub Humanoid, 2017, IEEE International Conference on Computer Vision Workshops (ICCVW).
[10] Percy Liang, et al. Understanding Black-box Predictions via Influence Functions, 2017, ICML.
[11] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2017, IEEE Symposium on Security and Privacy (SP).
[12] Luis Muñoz-González, et al. Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection, 2018, ArXiv.
[13] Luis Muñoz-González, et al. The Security of Machine Learning Systems, 2018.
[14] Fabio Roli, et al. Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization, 2017, AISec@CCS.
[15] Luis Muñoz-González, et al. Label Sanitization against Label Flipping Poisoning Attacks, 2018, Nemesis/UrbReas/SoGood/IWAISe/GDM@PKDD/ECML.
[16] Fabio Roli, et al. Security Evaluation of Pattern Classifiers under Attack, 2014, IEEE Transactions on Knowledge and Data Engineering.
[17] Prateek Mittal, et al. Dimensionality Reduction as a Defense against Evasion Attacks on Machine Learning Classifiers, 2017, ArXiv.
[18] Shie Mannor, et al. Robust Logistic Regression and Classification, 2014, NIPS.
[19] J. Doug Tygar, et al. Adversarial machine learning, 2011, AISec '11.
[20] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[21] David A. Wagner, et al. Defensive Distillation is Not Robust to Adversarial Examples, 2016, ArXiv.
[22] Patrick D. McDaniel, et al. Machine Learning in Adversarial Settings, 2016, IEEE Security & Privacy.
[23] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[24] Zhitao Gong, et al. Adversarial and Clean Data Are Not Twins, 2017, aiDM@SIGMOD.
[25] Chuan-Sheng Foo, et al. Efficient multiple hyperparameter learning for log-linear models, 2007, NIPS.
[26] Patrick D. McDaniel, et al. On the (Statistical) Detection of Adversarial Examples, 2017, ArXiv.
[27] Blaine Nelson, et al. Exploiting Machine Learning to Subvert Your Spam Filter, 2008, LEET.
[28] Blaine Nelson, et al. The security of machine learning, 2010, Machine Learning.
[29] Blaine Nelson, et al. Poisoning Attacks against Support Vector Machines, 2012, ICML.
[30] Percy Liang, et al. Certified Defenses for Data Poisoning Attacks, 2017, NIPS.
[31] Patrick D. McDaniel, et al. Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples, 2016, ArXiv.
[32] Barak A. Pearlmutter. Fast Exact Multiplication by the Hessian, 1994, Neural Computation.
[33] David Wagner, et al. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods, 2017, AISec@CCS.
[34] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2016, IEEE Symposium on Security and Privacy (SP).