Impact of Network Topology on the Convergence of Decentralized Federated Learning Systems

Federated learning is a popular framework that harnesses the computational power of edge devices to train a machine learning model in a distributed fashion. However, it is not always feasible or cost-effective to rely on a centralized server that controls and synchronizes the training process. In this paper, we consider the problem of training a machine learning model over a network of nodes in a fully decentralized fashion. In particular, we look for empirical evidence of how sensitive the training process is to various network characteristics and communication parameters. We present the outcome of several simulations conducted with different network topologies, datasets, and machine learning models.
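As a concrete illustration of the setting, the sketch below implements one common decentralized training scheme: each node takes a local gradient step on its own data and then averages its model with its neighbours, as prescribed by a doubly stochastic mixing matrix derived from the topology. This is a minimal sketch, not the exact protocol or topologies evaluated in the paper; the function names, the ring topology, and the toy quadratic objective are illustrative assumptions.

```python
import numpy as np

def ring_mixing_matrix(n):
    """Doubly stochastic mixing matrix for a ring topology: each node
    averages equally with itself and its two neighbours.
    (Illustrative choice; any connected topology yields a similar W.)"""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3
    return W

def decentralized_sgd(grads, x0, W, lr=0.1, rounds=100):
    """Each round: one local gradient step per node, followed by a
    gossip averaging step with the neighbours encoded in W."""
    n = W.shape[0]
    X = np.tile(x0, (n, 1))              # one model copy per node
    for _ in range(rounds):
        G = np.stack([grads[i](X[i]) for i in range(n)])
        X = W @ (X - lr * G)             # local step, then mixing
    return X

# Toy check: node i minimises ||x - t_i||^2, so the network-wide
# optimum is the mean of the targets t_i. The average model should
# approach it even though no node ever sees the other nodes' targets.
n, d = 8, 2
rng = np.random.default_rng(0)
targets = rng.normal(size=(n, d))
grads = [lambda x, t=t: 2 * (x - t) for t in targets]
X = decentralized_sgd(grads, np.zeros(d), ring_mixing_matrix(n))
print(np.allclose(X.mean(axis=0), targets.mean(axis=0), atol=1e-2))  # True
```

Because W is doubly stochastic, the average of the node models follows the same trajectory as centralized gradient descent; how quickly the individual models agree with that average is governed by the spectral gap of W, which is precisely where the network topology enters the convergence behaviour.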
