WAFFLE: Weighted Averaging for Personalized Federated Learning

In federated learning, model personalization is an effective strategy for dealing with heterogeneous training data across clients. We introduce WAFFLE (Weighted Averaging For Federated LEarning), a personalized collaborative machine learning algorithm that leverages stochastic control variates for faster convergence. WAFFLE uses the Euclidean distance between clients' updates to weight their individual contributions and thereby minimize the personalized model's loss on the specific agent of interest. In a series of experiments, we compare our approach to two recent personalized federated learning methods, Weight Erosion and APFL, as well as two general FL methods, Federated Averaging and SCAFFOLD. Performance is evaluated under two categories of non-identical client data distributions, concept shift and label skew, on two image datasets (MNIST and CIFAR-10). Our experiments demonstrate the effectiveness of WAFFLE, which matches or improves on the accuracy of these baselines while converging faster.
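The core idea, averaging client updates with weights derived from their Euclidean distance to the target client's update, can be sketched as follows. The softmax-of-negative-distance mapping and the `temperature` parameter are illustrative assumptions; the abstract does not specify WAFFLE's exact weighting formula.

```python
import numpy as np

def distance_weighted_average(updates, target_idx, temperature=1.0):
    """Combine client updates into one personalized update for a target client.

    Each client's weight is based on the Euclidean distance between its
    update and the target client's update: closer updates contribute more.
    The softmax of the negative distances used here is an illustrative
    choice, not necessarily the rule used by WAFFLE itself.
    """
    target = updates[target_idx]
    dists = np.array([np.linalg.norm(u - target) for u in updates])
    weights = np.exp(-dists / temperature)
    weights /= weights.sum()  # normalize so the weights sum to 1
    personalized = sum(w * u for w, u in zip(weights, updates))
    return personalized, weights
```

Under this scheme the target client's own update always receives the largest weight (its distance to itself is zero), while updates from clients with very different data distributions are down-weighted rather than discarded.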

[1] Trevor Darrell, et al. Deep Layer Aggregation, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.

[2] Sashank J. Reddi, et al. SCAFFOLD: Stochastic Controlled Averaging for Federated Learning, 2019, ICML.

[3] Sai Praneeth Karimireddy, et al. Personalized Federated Image Classification using Weight Erosion, 2020.

[4] Aryan Mokhtari, et al. Personalized Federated Learning: A Meta-Learning Approach, 2020, arXiv.

[5] Richard Nock, et al. Advances and Open Problems in Federated Learning, 2021, Found. Trends Mach. Learn.

[6] Jon Kleinberg, et al. Model-sharing Games: Analyzing Federated Learning Under Voluntary Participation, 2020, AAAI.

[7] Virginia Smith, et al. Ditto: Fair and Robust Federated Learning Through Personalization, 2020, ICML.

[8] Martin Jaggi, et al. Optimal Model Averaging: Towards Personalized Collaborative Learning, 2021, arXiv.

[9] Ameet Talwalkar, et al. Federated Multi-Task Learning, 2017, NIPS.

[10] Milind Kulkarni, et al. Survey of Personalization Techniques for Federated Learning, 2020, 2020 Fourth World Conference on Smart Trends in Systems, Security and Sustainability (WorldS4).

[11] Sergey Levine, et al. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks, 2017, ICML.

[12] Hubert Eichner, et al. Federated Evaluation of On-device Personalization, 2019, arXiv.

[13] Martin Jaggi, et al. Weight Erosion: An Update Aggregation Scheme for Personalized Collaborative Machine Learning, 2020, DART/DCL@MICCAI.

[14] Blaise Agüera y Arcas, et al. Communication-Efficient Learning of Deep Networks from Decentralized Data, 2016, AISTATS.

[15] Aryan Mokhtari, et al. Exploiting Shared Representations for Personalized Federated Learning, 2021, ICML.

[16] Lawrence D. Jackel, et al. Backpropagation Applied to Handwritten Zip Code Recognition, 1989, Neural Computation.

[17] Sanja Fidler, et al. Personalized Federated Learning with First Order Model Optimization, 2020, ICLR.

[18] Mehrdad Mahdavi, et al. Adaptive Personalized Federated Learning, 2020, arXiv.