Covert Model Poisoning Against Federated Learning: Algorithm Design and Optimization

Federated learning (FL), as a distributed machine learning framework, is vulnerable to external attacks on FL models during parameter transmission. An attacker in FL may control a number of participating clients and purposely craft the uploaded model parameters to manipulate the system output, an attack known as model poisoning (MP). In this paper, we aim to propose effective MP algorithms that defeat state-of-the-art defensive aggregation mechanisms (e.g., Krum and the trimmed mean) implemented at the server without being noticed, i.e., covert MP (CMP). Specifically, we first formulate MP as an optimization problem that minimizes the Euclidean distance between the manipulated model and the designated one, subject to a defensive aggregation rule. We then develop CMP algorithms against different defensive mechanisms based on the solutions of the corresponding optimization problems. Furthermore, to reduce the optimization complexity, we propose low-complexity CMP algorithms that incur only a slight performance degradation. For the case in which the attacker does not know the defensive aggregation mechanism, we design a blind CMP algorithm that properly adjusts the manipulated model according to the aggregated model produced by the unknown defense. Our experimental results demonstrate that the proposed CMP algorithms are effective and substantially outperform existing attack mechanisms.
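To make the formulation above concrete, here is a minimal sketch in our own notation (the symbols $\tilde{\mathbf{w}}_i$, $\mathbf{w}^\star$, $c$, and $\mathcal{A}$ are our assumptions, not necessarily the paper's): with clients $1,\dots,c$ controlled by the attacker, $\mathbf{w}^\star$ the attacker's designated target model, and $\mathcal{A}(\cdot)$ the server's defensive aggregation rule (e.g., Krum or the trimmed mean), the MP problem can be read as

\[
\min_{\tilde{\mathbf{w}}_1,\dots,\tilde{\mathbf{w}}_c}\; \bigl\lVert \mathcal{A}\bigl(\tilde{\mathbf{w}}_1,\dots,\tilde{\mathbf{w}}_c,\, \mathbf{w}_{c+1},\dots,\mathbf{w}_n\bigr) - \mathbf{w}^\star \bigr\rVert_2 ,
\]

where the covertness requirement corresponds to the constraint that the crafted updates $\tilde{\mathbf{w}}_i$ are accepted, rather than filtered out, by $\mathcal{A}$.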

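Since the attack is designed to evade Krum and the trimmed mean, a short NumPy sketch of these two aggregation rules may help fix ideas. This is an illustrative implementation under our own assumptions (the function names and the parameters f, the assumed Byzantine count, and k, the number of values trimmed per tail, are ours), not the paper's code.

import numpy as np

def krum(updates, f):
    # Krum (Blanchard et al., 2017): select the single client update whose
    # summed squared distance to its n - f - 2 nearest neighbours is smallest.
    n = len(updates)
    scores = []
    for i in range(n):
        dists = sorted(float(np.sum((updates[i] - updates[j]) ** 2))
                       for j in range(n) if j != i)
        scores.append(sum(dists[: n - f - 2]))
    return updates[int(np.argmin(scores))]

def trimmed_mean(updates, k):
    # Coordinate-wise trimmed mean (Yin et al., 2018): discard the k largest
    # and k smallest values of each coordinate, then average the remainder.
    sorted_updates = np.sort(np.stack(updates), axis=0)
    return sorted_updates[k: len(updates) - k].mean(axis=0)

# Illustrative use: ten client updates, some of which may be attacker-crafted.
updates = [np.random.randn(100) for _ in range(10)]
agg_krum = krum(updates, f=2)
agg_tm = trimmed_mean(updates, k=2)

In this framing, a CMP attacker searches for crafted updates that both survive these rules and drag the aggregate toward the designated model.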