Beyond spectral gap: The role of the topology in decentralized learning

In data-parallel optimization of machine learning models, workers collaborate to improve their estimates of the model: more accurate gradients allow them to use larger learning rates and optimize faster. We consider the setting in which all workers sample from the same dataset and communicate over a sparse graph (decentralized). In this setting, current theory fails to capture important aspects of real-world behavior. First, the 'spectral gap' of the communication graph is not predictive of its empirical performance in (deep) learning. Second, current theory does not explain why collaboration enables larger learning rates than training alone. In fact, it prescribes smaller learning rates, which further decrease as graphs become larger, failing to explain convergence in infinite graphs. This paper aims to paint an accurate picture of sparsely connected distributed optimization when workers share the same data distribution. We quantify how the graph topology influences convergence in a quadratic toy problem and provide theoretical results for general smooth and (strongly) convex objectives. Our theory matches empirical observations in deep learning and accurately describes the relative merits of different graph topologies.
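To make the setting concrete, below is a minimal sketch (not the paper's code) of decentralized SGD on an isotropic quadratic toy objective: each worker takes a local stochastic gradient step and then averages its parameters with its neighbors through a gossip matrix, whose spectral gap is the quantity the abstract refers to. The ring topology, the uniform 1/3 mixing weights, the helper names, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch of decentralized SGD with gossip averaging (illustrative only).
import numpy as np

def ring_gossip_matrix(n):
    """Doubly stochastic gossip matrix for a ring of n workers (self + 2 neighbors)."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 1 / 3
        W[i, (i - 1) % n] = 1 / 3
        W[i, (i + 1) % n] = 1 / 3
    return W

def spectral_gap(W):
    """1 minus the second-largest eigenvalue magnitude of the gossip matrix."""
    eigvals = np.sort(np.abs(np.linalg.eigvals(W)))[::-1]
    return 1.0 - eigvals[1]

def decentralized_sgd(W, d=10, steps=1000, lr=0.1, noise=1.0, seed=0):
    """Each worker minimizes f(x) = ||x||^2 / 2 from i.i.d. stochastic gradients,
    alternating one local SGD step with one round of gossip averaging."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    X = rng.normal(size=(n, d))  # one row of parameters per worker
    for _ in range(steps):
        grads = X + noise * rng.normal(size=(n, d))  # gradient of ||x||^2/2 plus noise
        X = W @ (X - lr * grads)                     # local step, then mix with neighbors
    return X

W = ring_gossip_matrix(32)
print("spectral gap of a 32-node ring:", spectral_gap(W))
X = decentralized_sgd(W)
print("mean squared distance to the optimum:", np.mean(X ** 2))
```

Varying the topology (e.g., a fully connected graph versus a long ring) while keeping the learning rate fixed gives a simple way to probe, on this toy problem, how much of the convergence behavior the spectral gap alone actually predicts.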
