Toward Smart Security Enhancement of Federated Learning Networks

As traditional centralized learning networks (CLNs) face increasing challenges in privacy preservation, communication overhead, and scalability, federated learning networks (FLNs) have been proposed as a promising alternative paradigm for training machine learning (ML) models. In contrast to the centralized data storage and processing of CLNs, FLNs rely on a number of edge devices (EDs) to store data and perform training in a distributed manner. The EDs in an FLN can therefore keep their training data local, which preserves privacy and reduces communication overhead. However, since model training in FLNs depends on the contributions of all EDs, the training process can be disrupted if some EDs upload incorrect or falsified training results, i.e., launch poisoning attacks. In this paper, we review the vulnerabilities of FLNs and, in particular, give an overview of poisoning attacks and the mainstream countermeasures. The existing countermeasures, however, provide only passive protection and fail to consider the training fees paid for the contributions of the EDs, resulting in an unnecessarily high training cost. We therefore present a smart security enhancement framework for FLNs. In particular, a verify-before-aggregate (VBA) procedure is developed to identify and remove non-benign training results from the EDs. Afterward, deep reinforcement learning (DRL) is applied to learn the behavior patterns of the EDs and to actively select EDs that can provide benign training results while charging low training fees. Simulation results show that the proposed framework protects FLNs effectively and efficiently.
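The abstract describes the VBA procedure only at a high level. The following is a minimal sketch of the verify-before-aggregate idea, not the paper's implementation: it assumes the server holds a small trusted validation set, that each ED uploads a full weight vector, and that an update counts as benign if it does not degrade the validation loss beyond a fixed tolerance. The names `validation_loss`, `TOLERANCE`, and the fee-sorted greedy loop are all illustrative assumptions.

```python
import numpy as np

TOLERANCE = 0.05  # assumed: max allowed validation-loss increase per update

def validation_loss(weights: np.ndarray, X: np.ndarray, y: np.ndarray) -> float:
    """Mean squared error of a linear model; stands in for any model loss."""
    return float(np.mean((X @ weights - y) ** 2))

def verify_before_aggregate(global_w, updates, fees, X_val, y_val, budget):
    """Verify each ED's uploaded weights, then average only the benign,
    affordable ones (FedAvg-style mean over the accepted updates)."""
    base = validation_loss(global_w, X_val, y_val)
    accepted, cost = [], 0.0
    # Cheapest EDs first: a greedy stand-in for the paper's DRL selection.
    for w, fee in sorted(zip(updates, fees), key=lambda t: t[1]):
        if cost + fee > budget:
            continue  # skip EDs whose fee would exceed the training budget
        if validation_loss(w, X_val, y_val) <= base + TOLERANCE:
            accepted.append(w)  # benign: loss did not noticeably degrade
            cost += fee
    if not accepted:            # no update passed verification: keep model
        return global_w
    return np.mean(accepted, axis=0)

# Toy usage: one poisoned update among three uploads.
rng = np.random.default_rng(0)
X_val = rng.normal(size=(64, 3))
true_w = np.array([1.0, -2.0, 0.5])
y_val = X_val @ true_w
good = [true_w + rng.normal(scale=0.01, size=3) for _ in range(2)]
poisoned = [-10.0 * true_w]  # falsified weights from a malicious ED
new_w = verify_before_aggregate(np.zeros(3), good + poisoned,
                                fees=[1.0, 1.2, 0.1], X_val=X_val,
                                y_val=y_val, budget=5.0)
print(new_w)  # close to true_w; the poisoned update is rejected
```

The fee-sorted greedy loop is only a placeholder for the selection step: in the proposed framework, a DRL agent instead learns the behavior patterns of the EDs over repeated rounds and proactively selects those likely to return benign, low-fee results.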
