Complexity Lower Bounds for Nonconvex-Strongly-Concave Min-Max Optimization

We provide a first-order oracle complexity lower bound for finding stationary points of min-max optimization problems where the objective function is smooth, nonconvex in the minimization variable, and strongly concave in the maximization variable. We establish a lower bound of Ω(√κ ε^{-2}) for deterministic oracles, where ε defines the level of approximate stationarity and κ is the condition number. Our lower bound matches the best existing upper bound in the ε and κ dependence up to logarithmic factors. For stochastic oracles, we provide a lower bound of Ω(√κ ε^{-2} + κ^{1/3} ε^{-4}). It suggests that there is a gap in the condition number dependence between the best existing upper bound O(κ^3 ε^{-4}) and our lower bound.
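For concreteness, the following is a minimal sketch of the standard nonconvex-strongly-concave setting these bounds refer to; the symbols f, Φ, L, and μ follow the usual convention for this problem class and are supplied here for illustration rather than quoted from the paper:

\[
\min_{x \in \mathbb{R}^m} \; \max_{y \in \mathbb{R}^n} \; f(x, y),
\qquad
\Phi(x) := \max_{y \in \mathbb{R}^n} f(x, y),
\qquad
\kappa := L/\mu ,
\]

where f is L-smooth, f(x, ·) is μ-strongly concave for every fixed x, and κ ≥ 1 is the condition number. The goal is an ε-stationary point of the primal function Φ, i.e. a point x with \|\nabla \Phi(x)\| \le \epsilon, and the oracle complexity counts the number of (stochastic) gradient evaluations of f required to find one.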
