Convergence Rate Bounds for the Mirror Descent Method: IQCs, Popov Criterion and Bregman Divergence

This paper presents a comprehensive convergence analysis for the mirror descent (MD) method, a widely used algorithm in convex optimization. The key feature of this algorithm is that it provides a generalization of classical gradient-based methods via the use of generalized distance-like functions, which are formulated using the Bregman divergence. Establishing convergence rate bounds for this algorithm is in general a non-trivial problem due to the lack of monotonicity properties in the composite nonlinearities involved. In this paper we show that the Bregman divergence from the optimal solution, which is commonly used as a Lyapunov function for this algorithm, is a special case of Lyapunov functions that follow when the Popov criterion is applied to an appropriate reformulation of the problem. This is then used as a basis to construct an integral quadratic constraint (IQC) framework through which convergence rate bounds with reduced conservatism can be deduced. We also illustrate via examples that the convergence rate bounds derived can be tight.
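To make the setting concrete, the sketch below (not taken from the paper) illustrates the standard entropic mirror descent iteration on the probability simplex, where the mirror map is the negative entropy and its Bregman divergence is the Kullback-Leibler divergence; the linear objective, step size, and dimension are illustrative choices. It tracks the Bregman divergence from the optimal solution along the iterates, the quantity the abstract identifies as the commonly used Lyapunov function.

```python
# Minimal sketch (assumed example, not the paper's method): entropic mirror
# descent on the probability simplex for a linear objective f(x) = c^T x.
# The Bregman divergence of the negative-entropy mirror map is the KL
# divergence D(x*, x), monitored here as a Lyapunov-like quantity.
import numpy as np

def mirror_descent_simplex(c, steps=50, eta=0.5):
    """Entropic mirror descent for min_{x in simplex} c^T x."""
    n = len(c)
    x = np.full(n, 1.0 / n)        # uniform starting point in the simplex
    i_star = np.argmin(c)          # minimiser is the vertex e_{i*}
    divergences = []
    for _ in range(steps):
        # D(x*, x) = sum_i x*_i log(x*_i / x_i) = -log x_{i*} for a vertex x*
        divergences.append(-np.log(x[i_star]))
        # exponentiated-gradient update; the gradient of c^T x is c
        x = x * np.exp(-eta * c)
        x = x / x.sum()            # renormalise back onto the simplex
    return x, divergences

if __name__ == "__main__":
    c = np.array([3.0, 1.0, 2.0])
    x_final, divs = mirror_descent_simplex(c)
    print("final iterate:", np.round(x_final, 4))
    print("Bregman divergence non-increasing:",
          all(d2 <= d1 + 1e-12 for d1, d2 in zip(divs, divs[1:])))
```

In this toy case the divergence decreases monotonically along the iterates; the paper's contribution is to show that such Bregman-divergence Lyapunov functions arise as a special case of the Popov criterion and to obtain tighter rate bounds through an IQC formulation.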
