Loss Tolerant Federated Learning

Federated learning has attracted attention in recent years as a way to collaboratively train models on distributed devices while preserving data privacy. The limited network capacity of mobile and IoT devices is regarded as one of the major challenges for cross-device federated learning. Recent solutions have focused on threshold-based client selection schemes to guarantee communication efficiency. However, we find that this approach can cause biased client selection and degraded performance. Moreover, we find that the challenge of limited network capacity may be overstated in some cases, and that packet loss is not always harmful. In this paper, we explore loss-tolerant federated learning (LT-FL) in terms of aggregation, fairness, and personalization. We use ThrowRightAway (TRA) to accelerate data uploading for low-bandwidth devices by intentionally ignoring some packet losses. The results suggest that, with proper integration, TRA and other algorithms can together guarantee personalization and fairness performance in the face of packet loss below a certain fraction (10%–30%).
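To make the loss-tolerance idea concrete, the following is a minimal sketch (not the paper's actual implementation) of what intentionally ignoring lost packets could look like on the server side: a client's flat update vector is chunked into "packets", a random fraction of chunks is dropped, and the server aggregates the zero-filled updates, rescaling each by its received fraction. The function names `lossy_upload` and `aggregate`, the chunking scheme, and the rescaling heuristic are all illustrative assumptions, not the TRA protocol itself.

```python
import numpy as np

def lossy_upload(update, chunk_size=64, loss_rate=0.1, rng=None):
    """Drop whole chunks of a flat update vector with probability `loss_rate`,
    mimicking an upload that ignores lost packets rather than retransmitting.
    Returns the zero-filled update and the fraction of chunks received."""
    rng = rng if rng is not None else np.random.default_rng(0)
    out = update.copy()
    n_chunks = int(np.ceil(len(update) / chunk_size))
    kept = 0
    for i in range(n_chunks):
        sl = slice(i * chunk_size, (i + 1) * chunk_size)
        if rng.random() < loss_rate:
            out[sl] = 0.0  # packet lost: the server simply sees zeros here
        else:
            kept += 1
    return out, kept / n_chunks

def aggregate(updates_and_fracs):
    """Loss-tolerant FedAvg-style sketch: rescale each zero-filled update by
    its received fraction so its expected magnitude matches a lossless one,
    then average across clients."""
    rescaled = [u / max(f, 1e-8) for u, f in updates_and_fracs]
    return np.mean(rescaled, axis=0)
```

With a loss rate in the 10%–30% range that the abstract mentions, each client still contributes most of its update, and the rescaling keeps the aggregate's expected scale comparable to the lossless case.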
