Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Production Federated Learning
Posted by 爱吃猫的鱼0 on April 5, 2022, 21:40
Virat Shejwalkar | Amir Houmansadr | Peter Kairouz | Daniel Ramage