Scalable Average Consensus with Compressed Communications

We propose a new decentralized average consensus algorithm with compressed communication whose convergence time scales linearly with the network size n. We prove that the proposed method converges to the average of the initial values held locally by the agents even though the agents exchange only compressed messages. The algorithm supports a broad class of compression operators (possibly biased) and allows agents to interact over arbitrary static, undirected, and connected networks. We further present numerical experiments that confirm our theoretical results and illustrate the scalability and communication efficiency of our algorithm.
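The abstract does not spell out the update rule, but the general idea of average consensus with compressed messages can be illustrated with a short, self-contained sketch. The code below follows the well-known CHOCO-gossip pattern (agents exchange compressed differences between their local states and publicly shared estimates), not necessarily the algorithm proposed here; the step size `gamma`, the sparsification level `k`, and the helper functions `top_k` and `compressed_gossip_average` are illustrative choices, not part of the paper.

```python
import numpy as np

def top_k(v, k):
    """Biased top-k compression: keep only the k largest-magnitude entries."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def compressed_gossip_average(x0, W, steps=3000, gamma=0.1, k=1):
    """CHOCO-gossip-style averaging sketch (illustrative, not the paper's method).

    x0: (n, d) array of initial values, one row per agent.
    W:  (n, n) symmetric, doubly stochastic mixing matrix of a
        connected, undirected graph.
    gamma, k: illustrative consensus step size and sparsification level.
    """
    x = x0.astype(float).copy()
    x_hat = np.zeros_like(x)              # publicly shared estimates of the states
    for _ in range(steps):
        # Each agent transmits only the compressed difference between
        # its current state and its publicly shared estimate.
        q = np.vstack([top_k(row, k) for row in (x - x_hat)])
        x_hat = x_hat + q                 # all agents update the shared estimates
        # Consensus step uses the compressed estimates, not the exact states.
        x = x + gamma * (W - np.eye(len(W))) @ x_hat
    return x

# Example: ring of n = 5 agents with simple symmetric mixing weights.
n, d = 5, 3
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i + 1) % n] = 0.25
    W[i, (i - 1) % n] = 0.25
rng = np.random.default_rng(0)
x0 = rng.standard_normal((n, d))
x_final = compressed_gossip_average(x0, W)
print("target average:", x0.mean(axis=0))
print("max deviation :", np.abs(x_final - x0.mean(axis=0)).max())
```

Because the mixing matrix is doubly stochastic, the network-wide average is preserved at every iteration, while each message carries only a sparsified (hence compressed, and in this case biased) vector rather than the full state.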
