A Chebyshev-Accelerated Primal-Dual Method for Distributed Optimization

We consider a distributed optimization problem over a network of agents aiming to minimize a global objective function that is the sum of local convex composite cost functions. To this end, we propose a distributed Chebyshev-accelerated primal-dual algorithm that achieves faster ergodic convergence rates. In standard distributed primal-dual algorithms, the speed of convergence towards a global optimum (i.e., a saddle point of the corresponding Lagrangian function) is directly influenced by the eigenvalues of the Laplacian matrix representing the communication graph. In this paper, we use Chebyshev matrix polynomials to generate gossip matrices whose spectral properties yield faster convergence, while still allowing for a fully distributed implementation. As a result, the proposed algorithm requires fewer gradient updates at the cost of additional rounds of communication between agents. We illustrate the performance of the proposed algorithm on a distributed signal recovery problem. Our simulations show that Chebyshev matrix polynomials can improve the convergence speed of a primal-dual algorithm over communication networks, especially in networks with poor spectral properties, by trading local computation for communication rounds.
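A minimal sketch of the underlying Chebyshev acceleration idea on a consensus step, assuming a symmetric doubly stochastic gossip matrix W with eigenvalues 1 = λ1 > λ2 ≥ … ≥ λn > −1. For illustration the spectral bounds λ2 and λn are computed centrally here; in a fully distributed deployment they would be bounded or estimated in advance. Applying the degree-K Chebyshev polynomial of W (normalized to equal 1 at λ = 1) shrinks the disagreement component by roughly 1/T_K(μ) with μ > 1, instead of the λ2^K factor of plain gossip, at the price of K communication rounds per application:

```python
import numpy as np

def chebyshev_gossip(W, x0, K):
    """Apply the degree-K Chebyshev polynomial of W (normalized so the
    polynomial equals 1 at eigenvalue 1) to the state vector x0.
    Assumes W is symmetric doubly stochastic with eigenvalues
    1 = lam_1 > lam_2 >= ... >= lam_n > -1."""
    if K == 0:
        return x0
    lam = np.sort(np.linalg.eigvalsh(W))
    lam_n, lam_2 = lam[0], lam[-2]
    # Affine map t(x) = a*x + b sending [lam_n, lam_2] onto [-1, 1];
    # the consensus eigenvalue 1 is mapped to mu = t(1) > 1.
    a = 2.0 / (lam_2 - lam_n)
    b = -(lam_2 + lam_n) / (lam_2 - lam_n)
    mu = a + b
    # Three-term Chebyshev recurrence: y_k = T_k(t(W)) x0, each step
    # costing one multiplication by W (one gossip round).
    y_prev, y_cur = x0, a * (W @ x0) + b * x0
    T_prev, T_cur = 1.0, mu
    for _ in range(K - 1):
        y_prev, y_cur = y_cur, 2.0 * (a * (W @ y_cur) + b * y_cur) - y_prev
        T_prev, T_cur = T_cur, 2.0 * mu * T_cur - T_prev
    # Normalizing by T_K(mu) makes the polynomial equal 1 at lam = 1,
    # so the network average is preserved exactly.
    return y_cur / T_cur

# Toy example: 5-node path graph, a network with poor spectral properties.
A = np.diag(np.ones(4), 1)
A = A + A.T
L = np.diag(A.sum(axis=1)) - A
W = np.eye(5) - L / 3.0            # gossip matrix, eigenvalues in (-1, 1]
x0 = np.array([5.0, 1.0, 0.0, 2.0, 7.0])
x_cheb = chebyshev_gossip(W, x0, 5)     # 5 accelerated gossip rounds
x_plain = np.linalg.matrix_power(W, 5) @ x0  # 5 plain gossip rounds
```

On this example the Chebyshev-filtered iterate is markedly closer to the average consensus value than plain gossip after the same number of communication rounds, which is the effect the abstract exploits inside the primal-dual updates.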
