Mixed Quantization Enabled Federated Learning to Tackle Gradient Inversion Attacks

Federated Learning (FL) enables collaborative model building among a large number of participants without explicit sharing of raw data. However, the gradient exchange inherent to FL's architecture makes it vulnerable to gradient inversion attacks, which reconstruct sensitive training data from the shared model gradients with a high success rate. Such attacks are especially alarming because they are covert: the attacker backtracks from the gradients to recover information about the raw data without degrading training performance, so the victims receive no signal that an attack is under way. Common defenses against data reconstruction in FL include adding noise via differential privacy, homomorphic encryption, and gradient pruning. Each suffers from major drawbacks: encryption requires a tedious key-generation process that scales poorly with the number of clients, noise injection causes a significant performance drop, and pruning makes it difficult to select a suitable pruning ratio. As a countermeasure, we propose a mixed quantization enabled FL scheme and show empirically that it resolves the issues above. In addition, our approach provides greater robustness because different layers of the deep model are quantized with different precisions and quantization modes. We validate our defense against both iteration-based and recursion-based gradient inversion attacks, and our evaluation on three benchmark datasets shows that the proposed FL framework outperforms the baseline defense mechanisms.
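To make the per-layer scheme concrete, below is a minimal sketch of mixed quantization applied to a client's gradients before upload. This is not the paper's implementation: the `quantize` helper, the uniform quantizer, the layer names, and the example bit-width/mode plan are all illustrative assumptions; only the idea of assigning each layer its own precision and quantization mode comes from the abstract.

```python
import numpy as np

def quantize(tensor, num_bits, mode="stochastic", rng=None):
    """Uniform quantizer over the tensor's own range (illustrative).

    mode="nearest" rounds deterministically; mode="stochastic" rounds up
    or down with probability proportional to the remainder, which keeps
    the quantizer unbiased in expectation.
    """
    rng = rng or np.random.default_rng()
    levels = 2 ** num_bits - 1
    lo, hi = tensor.min(), tensor.max()
    scale = (hi - lo) / levels if hi > lo else 1.0
    normalized = (tensor - lo) / scale                # map onto [0, levels]
    if mode == "stochastic":
        floor = np.floor(normalized)
        quantized = floor + (rng.random(tensor.shape) < (normalized - floor))
    else:  # nearest-rounding
        quantized = np.round(normalized)
    return quantized * scale + lo                     # dequantize to float

def quantize_client_update(layer_grads, precision_plan):
    """Apply a per-layer (bits, mode) plan to a client's gradients
    before they leave the device (hypothetical helper)."""
    return {
        name: quantize(grad, *precision_plan[name])
        for name, grad in layer_grads.items()
    }

# Hypothetical plan: mix precisions and modes across layers so an
# attacker cannot assume one uniform quantizer when inverting gradients.
plan = {
    "conv1": (8, "stochastic"),
    "conv2": (6, "nearest"),
    "fc":    (4, "stochastic"),
}
grads = {name: np.random.randn(64, 64).astype(np.float32) for name in plan}
quantized_grads = quantize_client_update(grads, plan)
```

The intuition for the defense is that the server (and any eavesdropper) only ever sees the coarsened `quantized_grads`; the quantization error, which differs in magnitude and statistics from layer to layer, degrades the gradient-matching signal that iteration-based and recursion-based inversion attacks rely on.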
