On Centralized and Distributed Mirror Descent: Convergence Analysis Using Quadratic Constraints

Mirror descent (MD) is a powerful first-order optimization technique that subsumes several optimization algorithms, including gradient descent (GD). In this work, we develop a semi-definite programming (SDP) framework to analyze the convergence rate of MD in centralized and distributed settings under both strongly convex and non-strongly convex assumptions. We view MD through a dynamical systems lens and leverage quadratic constraints (QCs) to provide explicit convergence rates based on Lyapunov stability. For centralized MD under the strong convexity assumption, we develop an SDP that certifies exponential convergence rates. We prove that the SDP always has a feasible solution that recovers the optimal GD rate as a special case. We complement our analysis by establishing an O(1/k) convergence rate for convex problems. Next, we analyze the convergence of distributed MD and characterize the rate using an SDP. To the best of our knowledge, the numerical rate of distributed MD has not been previously reported in the literature. We further prove an O(1/k) convergence rate for distributed MD in the convex setting. Our numerical experiments on strongly convex problems indicate that our framework certifies convergence rates superior to the existing rates for distributed GD.
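As an illustration of the algorithm under analysis (not the paper's SDP framework), the following is a minimal sketch of centralized mirror descent with the negative-entropy mirror map, which yields the classic exponentiated-gradient update on the probability simplex. With the Euclidean mirror map, the same scheme reduces to ordinary GD. The function name, step size, and test objective are illustrative choices, not taken from the paper.

```python
import numpy as np

def mirror_descent_simplex(grad, x0, step, iters):
    """Mirror descent with the negative-entropy mirror map.

    The dual update x * exp(-step * grad) followed by normalization is the
    Bregman (KL) projection back onto the probability simplex, so iterates
    remain feasible at every step.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x * np.exp(-step * grad(x))  # mirror (multiplicative) update
        x /= x.sum()                     # KL projection onto the simplex
    return x

# Example: minimize the linear objective f(x) = <c, x> over the simplex.
# The minimizer concentrates all mass on the smallest coordinate of c.
c = np.array([3.0, 1.0, 2.0])
x = mirror_descent_simplex(lambda x: c, np.ones(3) / 3, step=0.5, iters=200)
```

For a linear objective the iterates satisfy x_k ∝ x_0 · exp(-k·step·c), so mass shifts geometrically onto the coordinate with the smallest cost, consistent with the exponential rates the paper certifies in the strongly convex regime.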
