A First-order Method for Monotone Stochastic Variational Inequalities on Semidefinite Matrix Spaces

Motivated by multi-user optimization problems and non-cooperative Nash games in stochastic regimes, we consider stochastic variational inequality (SVI) problems on matrix spaces, where the variables are positive semidefinite matrices and the mapping is merely monotone. Much of the interest in the theory of variational inequalities (VIs) has focused on problems posed on vector spaces, and most existing methods for VIs on matrix spaces either rely on strong assumptions or require a two-loop framework in which a projection problem, i.e., a semidefinite optimization problem, must be solved at each iteration. Motivated by this gap, we develop a stochastic mirror descent method in which the distance generating function is chosen to be the quantum entropy. The resulting scheme is a single-loop first-order method in the sense that it requires only a gradient-type update at each iteration. The novelty of this work lies in a convergence analysis carried out by employing an auxiliary sequence of stochastic matrices. Our contribution is threefold: (i) under this setting and using averaging techniques, we show that the averaged iterate generated by the algorithm converges to a weak solution of the SVI; (ii) we derive a convergence rate in terms of the expected value of a suitably defined gap function; (iii) we apply the developed method to a multiple-input multiple-output (MIMO) multi-cell cellular wireless network composed of seven hexagonal cells and present numerical experiments supporting the convergence of the proposed method.
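
To make the quantum-entropy mirror step concrete, the sketch below shows one way such an update can be implemented; it is an illustrative sketch, not the paper's implementation. It assumes the feasible set is the unit-trace spectrahedron (positive semidefinite matrices with trace one) and that a stochastic oracle returns noisy evaluations of the monotone mapping; the names entropic_smd, F_sample, psd_log, and psd_exp are hypothetical.

```python
import numpy as np


def _sym(M):
    """Symmetrize a square matrix."""
    return 0.5 * (M + M.T)


def psd_log(X, eps=1e-12):
    """Matrix logarithm of a symmetric positive semidefinite matrix."""
    w, V = np.linalg.eigh(X)
    return (V * np.log(np.maximum(w, eps))) @ V.T


def psd_exp(S):
    """Matrix exponential of a symmetric matrix."""
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T


def entropic_smd(F_sample, X0, step_size, num_iters, rng):
    """Stochastic mirror descent with the quantum entropy as the distance
    generating function, for a monotone mapping F whose variable lives on
    the unit-trace spectrahedron (assumed feasible set for this sketch).

    F_sample(X, rng) returns one stochastic sample of F(X); step_size(k)
    returns the step size gamma_k. The step-size-weighted averaged iterate
    is returned, mirroring the averaging used in the convergence analysis.
    """
    X = X0.copy()
    X_avg = np.zeros_like(X)
    total_weight = 0.0
    for k in range(num_iters):
        gamma = step_size(k)
        G = _sym(F_sample(X, rng))          # stochastic evaluation of the mapping
        S = psd_log(X) - gamma * G          # mirror (dual) step
        S -= np.max(np.linalg.eigvalsh(S)) * np.eye(S.shape[0])  # stabilize the exponential
        E = psd_exp(S)
        X = E / np.trace(E)                 # closed-form Bregman projection onto the spectrahedron
        X_avg += gamma * X                  # step-size-weighted averaging
        total_weight += gamma
    return X_avg / total_weight


# Hypothetical usage: F(X) = X - C is the gradient of 0.5*||X - C||_F^2 and
# hence monotone; the additive noise models the stochastic oracle.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 4
    C = np.diag(np.arange(1.0, n + 1.0))
    C /= np.trace(C)
    F = lambda X, r: (X - C) + 0.01 * _sym(r.standard_normal((n, n)))
    X_bar = entropic_smd(F, np.eye(n) / n, lambda k: 1.0 / np.sqrt(k + 1), 2000, rng)
    print(np.linalg.norm(X_bar - C))        # should be small
```

The design point illustrated here is that, with the quantum entropy as the distance generating function, the Bregman projection onto the unit-trace spectrahedron reduces to a matrix exponentiation followed by a trace normalization. This is what makes the method single-loop: no nested semidefinite projection problem needs to be solved at each iteration.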
