Privacy-preserving Byzantine-robust federated learning

Abstract The robustness of federated learning has become a major concern, since Byzantine adversaries, who may upload false data owing to unreliable communication channels, corrupted hardware, or even malicious attacks, can be concealed among the distributed workers. Meanwhile, it has been shown that membership inference attacks and model inversion attacks against federated learning can leak private training data. To address these challenges, we propose a privacy-preserving Byzantine-robust federated learning scheme (PBFL) that accounts for both the robustness of federated learning and the privacy of the workers. PBFL builds on an existing Byzantine-robust federated learning algorithm and combines it with distributed Paillier encryption and zero-knowledge proofs to guarantee privacy and to filter out anomalous parameters uploaded by Byzantine adversaries. Finally, we prove that our scheme provides a higher level of privacy protection than previous Byzantine-robust federated learning algorithms.
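The privacy guarantee rests on the additive homomorphism of Paillier encryption: workers encrypt their local updates, and the server can aggregate ciphertexts without ever seeing individual plaintexts. The following is a minimal illustrative sketch of that property, using a toy single-key Paillier cryptosystem with tiny hard-coded primes; it is not secure, and it omits the distributed-key variant, the zero-knowledge proofs, and the robust aggregation rule that the actual scheme employs.

```python
# Toy Paillier cryptosystem: tiny primes, demonstration only, NOT secure.
import math
import random

def keygen(p=467, q=311):                  # toy primes (illustrative)
    n = p * q
    lam = math.lcm(p - 1, q - 1)           # Carmichael lambda(n)
    g = n + 1                              # standard choice g = n + 1
    mu = pow(lam, -1, n)                   # valid since L(g^lam mod n^2) = lam mod n
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:             # r must be a unit mod n
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    L = (pow(c, lam, n * n) - 1) // n      # L(x) = (x - 1) / n
    return (L * mu) % n

pk, sk = keygen()
# Each worker encrypts its local update; the server multiplies the
# ciphertexts, which corresponds to adding the plaintexts.
updates = [3, 5, 7]
agg = 1
for u in updates:
    agg = (agg * encrypt(pk, u)) % (pk[0] ** 2)
print(decrypt(pk, sk, agg))                # 15 = 3 + 5 + 7
```

In the distributed-key setting used by PBFL, no single party holds the full decryption key, so even the aggregator learns only the aggregate, never an individual worker's update.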
