[1] Blaine Nelson, et al. Exploiting Machine Learning to Subvert Your Spam Filter, 2008, LEET.
[2] Blaine Nelson, et al. Support Vector Machines Under Adversarial Label Noise, 2011, ACML.
[3] Blaine Nelson, et al. Adversarial Machine Learning, 2011, AISec '11.
[4] Blaine Nelson, et al. Poisoning Attacks against Support Vector Machines, 2012, ICML.
[5] Shie Mannor, et al. Robust Logistic Regression and Classification, 2014, NIPS.
[6] Claudia Eckert, et al. Is Feature Selection Secure against Training Data Poisoning?, 2015, ICML.
[7] Maria-Florina Balcan, et al. Efficient Learning of Linear Separators under Bounded Noise, 2015, COLT.
[8] Xiaojin Zhu, et al. Using Machine Teaching to Identify Optimal Training-Set Attacks on Machine Learners, 2015, AAAI.
[9] Fabio Roli, et al. Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization, 2017, AISec@CCS.
[10] Percy Liang, et al. Understanding Black-box Predictions via Influence Functions, 2017, ICML.
[11] Maria-Florina Balcan, et al. The Power of Localization for Efficiently Learning Linear Separators with Noise, 2013, J. ACM.
[12] Samy Bengio, et al. Understanding Deep Learning Requires Rethinking Generalization, 2016, ICLR.
[13] Percy Liang, et al. Certified Defenses for Data Poisoning Attacks, 2017, NIPS.
[14] Luis Muñoz-González, et al. Don't Fool Me!: Detection, Characterisation and Diagnosis of Spoofed and Masked Events in Wireless Sensor Networks, 2017, IEEE Transactions on Dependable and Secure Computing.
[15] Luis Muñoz-González, et al. Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection, 2018, ArXiv.
[16] Chang Liu, et al. Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning, 2018, IEEE Symposium on Security and Privacy (SP).