Vale Tolpegin | Stacey Truex | Mehmet Emre Gursoy | Ling Liu
[1] Kannan Ramchandran, et al. Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates, 2018, ICML.
[2] Bo Li, et al. Automated poisoning attacks and defenses in malware detection systems: An adversarial machine learning approach, 2017, Comput. Secur.
[3] Fabio Roli, et al. Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization, 2017, AISec@CCS.
[4] Percy Liang, et al. Certified Defenses for Data Poisoning Attacks, 2017, NIPS.
[5] Susmita Sur-Kolay, et al. Systematic Poisoning Attacks on and Defenses for Machine Learning in Healthcare, 2015, IEEE Journal of Biomedical and Health Informatics.
[6] Rachid Guerraoui, et al. Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent, 2017, NIPS.
[7] Natalia Gimelshein, et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library, 2019, NeurIPS.
[8] Amir Houmansadr, et al. Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning, 2018, 2019 IEEE Symposium on Security and Privacy (SP).
[9] Vitaly Shmatikov, et al. Exploiting Unintended Feature Leakage in Collaborative Learning, 2018, 2019 IEEE Symposium on Security and Privacy (SP).
[10] Ying Cai, et al. Fake Co-visitation Injection Attacks to Recommender Systems, 2017, NDSS.
[11] Jia Liu, et al. Poisoning Attacks to Graph-Based Recommender Systems, 2018, ACSAC.
[12] Giuseppe Ateniese, et al. Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning, 2017, CCS.
[13] Xiangyang Luo, et al. Shielding Collaborative Learning: Mitigating Poisoning Attacks Through Client-Side Detection, 2019, IEEE Transactions on Dependable and Secure Computing.
[14] Hubert Eichner, et al. Towards Federated Learning at Scale: System Design, 2019, SysML.
[15] Rachid Guerraoui, et al. The Hidden Vulnerability of Distributed Learning in Byzantium, 2018, ICML.
[16] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[17] Luis Muñoz-González, et al. Label Sanitization against Label Flipping Poisoning Attacks, 2018, Nemesis/UrbReas/SoGood/IWAISe/GDM@PKDD/ECML.
[18] Michael P. Wellman, et al. SoK: Security and Privacy in Machine Learning, 2018, 2018 IEEE European Symposium on Security and Privacy (EuroS&P).
[19] Tudor Dumitras, et al. Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks, 2018, NeurIPS.
[20] Daniel Rueckert, et al. A generic framework for privacy preserving deep learning, 2018, ArXiv.
[21] Prateek Saxena, et al. Auror: defending against poisoning attacks in collaborative deep learning systems, 2016, ACSAC.
[22] Hubert Eichner, et al. Federated Learning for Mobile Keyboard Prediction, 2018, ArXiv.
[23] Li Fei-Fei, et al. ImageNet: A large-scale hierarchical image database, 2009, CVPR.
[24] Ananda Theertha Suresh, et al. Can You Really Backdoor Federated Learning?, 2019, ArXiv.
[25] Chang Liu, et al. Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning, 2018, 2018 IEEE Symposium on Security and Privacy (SP).
[26] Vitaly Shmatikov, et al. How To Backdoor Federated Learning, 2018, AISTATS.
[27] Claudia Eckert, et al. Is Feature Selection Secure against Training Data Poisoning?, 2015, ICML.
[28] Prateek Mittal, et al. Analyzing Federated Learning through an Adversarial Lens, 2018, ICML.
[29] Bo An, et al. Efficient Label Contamination Attacks Against Black-Box Learning Models, 2017, IJCAI.
[30] Roland Vollgraf, et al. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms, 2017, ArXiv.
[31] Luis Muñoz-González, et al. Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection, 2018, ArXiv.
[32] Markus Miettinen, et al. Poisoning Attacks on Federated Learning-based IoT Intrusion Detection System, 2020, Proceedings 2020 Workshop on Decentralized IoT Systems and Security.
[33] Minghong Fang, et al. Local Model Poisoning Attacks to Byzantine-Robust Federated Learning, 2019, USENIX Security Symposium.
[34] Sébastien Marcel, et al. Torchvision the machine-vision package of torch, 2010, ACM Multimedia.
[35] Fabio Roli, et al. Is data clustering in adversarial settings secure?, 2013, AISec.
[36] Lynn A. Karoly, et al. Health Insurance Portability and Accountability Act of 1996 (HIPAA) Administrative Simplification, 2010, Practice Management Consultant.
[37] Tudor Dumitras, et al. When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks, 2018, USENIX Security Symposium.
[38] Claudia Eckert, et al. Adversarial Label Flips Attack on Support Vector Machines, 2012, ECAI.
[39] Fabio Roli, et al. Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks, 2018, USENIX Security Symposium.
[40] Ling Liu, et al. Towards Demystifying Membership Inference Attacks, 2018, ArXiv.
[41] Youssef Khazbak, et al. MLGuard: Mitigating Poisoning Attacks in Privacy Preserving Distributed Collaborative Learning, 2020, 2020 29th International Conference on Computer Communications and Networks (ICCCN).
[42] Wenqi Wei, et al. Demystifying Membership Inference Attacks in Machine Learning as a Service, 2019, IEEE Transactions on Services Computing.
[43] Ivan Beschastnikh, et al. Mitigating Sybils in Federated Learning Poisoning, 2018, ArXiv.
[44] Song Han, et al. Deep Leakage from Gradients, 2019, NeurIPS.
[45] Yiran Chen, et al. Generative Poisoning Attack Method Against Neural Networks, 2017, ArXiv.
[46] Blaine Nelson, et al. Support Vector Machines Under Adversarial Label Noise, 2011, ACML.
[47] Richard Nock, et al. Advances and Open Problems in Federated Learning, 2019, Found. Trends Mach. Learn.
[48] Blaine Nelson, et al. Exploiting Machine Learning to Subvert Your Spam Filter, 2008, LEET.
[49] Chang Liu, et al. Robust Linear Regression Against Training Data Poisoning, 2017, AISec@CCS.
[50] Blaine Nelson, et al. Poisoning Attacks against Support Vector Machines, 2012, ICML.
[51] Claudia Eckert, et al. Support vector machines under adversarial label contamination, 2015, Neurocomputing.
[52] Tom Goldstein, et al. Transferable Clean-Label Poisoning Attacks on Deep Neural Nets, 2019, ICML.
[53] Heiko Ludwig, et al. Mitigating Poisoning Attacks on Machine Learning Models: A Data Provenance Based Approach, 2017, AISec@CCS.
[54] Ling Huang, et al. ANTIDOTE: understanding and defending against poisoning of anomaly detectors, 2009, IMC '09.
[55] Alex S. Taylor, et al. Let's Talk About Race: Identity, Chatbots, and AI, 2018, CHI.