Local-Global Knowledge Distillation in Heterogeneous Federated Learning with Non-IID Data

Federated learning enables multiple clients to collaboratively learn a global model by periodically aggregating the clients' models without transferring the local data. However, due to system and data heterogeneity, many approaches suffer from the "client-drift" issue, which can significantly slow down the convergence of global model training: as clients perform local updates on heterogeneous data through heterogeneous systems, their local models drift apart. To tackle this issue, one intuitive idea is to guide local training with global teachers, i.e., past global models, so that each client learns global knowledge from these models via adaptive knowledge distillation. Building on this insight, we propose FEDGKD, a novel approach for heterogeneous federated learning that fuses the knowledge of historical global models into local training to alleviate the "client-drift" issue. We evaluate FEDGKD with extensive experiments on various CV/NLP datasets (CIFAR-10/100, Tiny-ImageNet, AG News, SST-5) under different heterogeneous settings. The proposed method is guaranteed to converge under common assumptions and achieves superior accuracy in fewer communication rounds than five state-of-the-art methods.
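
The abstract gives no pseudocode, so the snippet below is a minimal sketch of the core idea as described: during local training, each client augments its supervised loss with a distillation term toward the fused predictions of a buffer of past global models (the "global teachers"). The helper name `local_update_with_global_kd`, the logit-averaging fusion, and the `kd_weight`/`temperature` hyperparameters are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def local_update_with_global_kd(model, teacher_models, loader, epochs=1,
                                lr=0.01, kd_weight=0.2, temperature=2.0,
                                device="cpu"):
    """One client's local update, regularized by distillation from past global models."""
    model.to(device).train()
    teachers = [t.to(device).eval() for t in teacher_models]
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)

    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            logits = model(x)

            # Fuse the historical global models' knowledge by averaging their logits.
            with torch.no_grad():
                teacher_logits = torch.stack([t(x) for t in teachers]).mean(dim=0)

            # Standard supervised loss on the client's local (possibly non-IID) data.
            ce_loss = F.cross_entropy(logits, y)

            # KD term pulls the local model toward the fused global predictions,
            # counteracting client drift during local updates.
            kd_loss = F.kl_div(
                F.log_softmax(logits / temperature, dim=1),
                F.softmax(teacher_logits / temperature, dim=1),
                reduction="batchmean",
            ) * (temperature ** 2)

            loss = ce_loss + kd_weight * kd_loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

    return model.state_dict()
```

In a standard FedAvg-style loop, the server would presumably append each newly aggregated global model to the teacher buffer (perhaps keeping only the most recent few) before broadcasting it for the next round; the exact buffering and fusion scheme used by FEDGKD is not specified in the abstract.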
