Defending Against Poisoning Attacks in Federated Learning via Adversarial Training

Recently, federated learning has shown significant advantages in protecting the privacy of training data by maintaining a joint model across multiple clients. However, recent work on its model security has shown that federated learning exhibits inherent vulnerabilities to active attacks launched by malicious participants. Poisoning is one of the most powerful active attacks: an inside attacker uploads crafted local model updates to degrade the performance of the global model. In this paper, we first illustrate how poisoning attacks work in the context of federated learning. We then propose a defense method that relies on a well-studied adversarial training technique, pivotal training, which improves the robustness of the global model against poisoned local updates. The main contribution of this work is that the countermeasure is simple and scalable: it requires no complex accuracy validation and changes only the optimization objective and loss function. Finally, we demonstrate the effectiveness of the proposed mitigation mechanism through extensive experiments.
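To make the adversarial training idea concrete, pivotal training (Louppe et al., "Learning to Pivot with Adversarial Networks", NIPS 2017) casts learning as a min-max problem between the predictive model f (parameters θ_f) and an adversary network r (parameters θ_r) that tries to recover a nuisance variable from f's output:

    min_{θ_f} max_{θ_r}  E_λ(θ_f, θ_r),   where   E_λ(θ_f, θ_r) = L_f(θ_f) − λ · L_r(θ_f, θ_r)

Here L_f is the primary task loss, L_r is the adversary's loss for predicting the nuisance variable, and λ ≥ 0 trades predictive accuracy against invariance to the nuisance. This is the original formulation from Louppe et al.; how the nuisance variable is instantiated in this defense (e.g., treating the poisoned-or-clean status of a local update as the nuisance) is our reading of the abstract, not a detail it states.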
