A Multi-agent Reinforcement Learning Approach for Efficient Client Selection in Federated Learning

Federated learning (FL) is a training technique that enables client devices to jointly learn a shared model by aggregating locally computed models without exposing their raw data. While most existing work focuses on improving FL model accuracy, in this paper we focus on improving training efficiency, which is often a hurdle to adopting FL in real-world applications. Specifically, we design an efficient FL framework that jointly optimizes model accuracy, processing latency, and communication efficiency, all of which are primary design considerations for practical deployments of FL. Inspired by the recent success of Multi-Agent Reinforcement Learning (MARL) in solving complex control problems, we present FedMarl, an MARL-based FL framework that performs efficient run-time client selection. Experiments show that FedMarl can significantly improve model accuracy while incurring much lower processing latency and communication cost.
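To make the client-selection idea concrete (the abstract does not supply code), here is a minimal sketch of MARL-driven client selection in a simulated FL system: each client device is controlled by a tiny independent Q-learning agent that decides per round whether its client participates, and all agents learn from one shared team reward that trades off accuracy gain against straggler latency and communication cost. Everything below is an illustrative assumption, not FedMarl's actual design: the toy environment, the linear Q-functions, and the weights W_ACC, W_LAT, W_COMM are hypothetical stand-ins.

import numpy as np

rng = np.random.default_rng(0)

N_CLIENTS, N_ROUNDS = 10, 50
W_ACC, W_LAT, W_COMM = 1.0, 0.3, 0.1   # hypothetical reward trade-off weights

# One tiny agent per client: a linear Q-function over a 3-dim state
# state = (local training loss, estimated latency, estimated comm cost)
theta = rng.normal(0, 0.1, size=(N_CLIENTS, 3, 2))  # action 0 = skip, 1 = select
eps, lr = 0.2, 0.05                                  # exploration rate, step size

def q_values(i, s):
    # Q(s, skip) and Q(s, select) for client i's agent
    return s @ theta[i]

global_acc = 0.1
for rnd in range(N_ROUNDS):
    states = rng.uniform(0, 1, size=(N_CLIENTS, 3))  # stand-in device statistics
    actions = np.array([
        rng.integers(2) if rng.random() < eps else int(np.argmax(q_values(i, s)))
        for i, s in enumerate(states)
    ])
    selected = actions.nonzero()[0]

    # Toy environment: more participants yield a larger accuracy gain, but the
    # round latency is set by the slowest selected client (straggler effect)
    acc_gain = 0.02 * np.sqrt(len(selected)) * (1 - global_acc)
    latency = states[selected, 1].max() if len(selected) else 0.0
    comm = 0.05 * len(selected)
    global_acc += acc_gain

    # Shared team reward, as in cooperative MARL with a common objective
    team_reward = W_ACC * acc_gain - W_LAT * latency - W_COMM * comm

    # One-step Q update for every agent using the shared reward
    for i in range(N_CLIENTS):
        td = team_reward - q_values(i, states[i])[actions[i]]
        theta[i, :, actions[i]] += lr * td * states[i]

print(f"final simulated accuracy: {global_acc:.3f}")

In this sketch the agents gradually learn to select clients whose states predict a good accuracy-latency-communication trade-off; a faithful implementation would replace the toy environment with real device profiles and local training statistics.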
