Increasing Iterate Averaging for Solving Saddle-Point Problems

Many problems in machine learning and game theory can be formulated as saddle-point problems, for which various first-order methods have been developed and proven efficient in practice. Under the general convex-concave assumption, most first-order methods only guarantee ergodic convergence, that is, the uniform averages of the iterates converge at an $O(1/T)$ rate in terms of the saddle-point residual. Numerically, however, the iterates themselves often converge much faster than the uniform averages. This observation motivates increasing averaging schemes that put more weight on later iterates, in contrast to the usual uniform averaging. We show that such increasing averaging schemes, applied to various first-order methods, preserve the $O(1/T)$ convergence rate with no additional assumptions or computational overhead. Extensive numerical experiments on zero-sum game solving, market equilibrium computation, and image denoising demonstrate the effectiveness of the proposed schemes. In particular, the increasing averages consistently outperform the uniform averages on all test problems, by orders of magnitude. When solving matrix and extensive-form games, increasing averages consistently outperform the last iterates as well. For matrix games, a first-order method equipped with increasing averaging outperforms the highly competitive CFR$^+$ algorithm.
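To make the idea concrete, below is a minimal sketch in Python/NumPy, not the paper's implementation: the projected extragradient method (the Euclidean instance of mirror prox) applied to a random matrix game $\min_x \max_y x^\top A y$ over probability simplices, maintaining a uniform iterate average and an increasing one side by side. The linearly increasing weights $w_t = t$ are an illustrative assumption; the paper's weighting scheme may differ.

```python
# A minimal sketch, not the paper's implementation: projected extragradient
# on a random matrix game min_x max_y x^T A y over probability simplices.
# The linearly increasing weights w_t = t are one illustrative choice of
# increasing averaging, not necessarily the paper's scheme.
import numpy as np


def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)


def residual(A, x, y):
    """Saddle-point residual (duality gap) of (x, y) for the matrix game."""
    return np.max(A.T @ x) - np.min(A @ y)


rng = np.random.default_rng(0)
A = rng.standard_normal((50, 40))
m, n = A.shape
eta = 0.9 / np.linalg.norm(A, 2)           # step size below 1/L, L = ||A||_2

x, y = np.full(m, 1.0 / m), np.full(n, 1.0 / n)
x_unif, y_unif = np.zeros(m), np.zeros(n)  # running uniform averages
x_inc, y_inc = np.zeros(m), np.zeros(n)    # running increasing averages
w_sum = 0.0

for t in range(1, 5001):
    # extragradient: a prediction half-step, then an update using it
    x_half = project_simplex(x - eta * (A @ y))
    y_half = project_simplex(y + eta * (A.T @ x))
    x = project_simplex(x - eta * (A @ y_half))
    y = project_simplex(y + eta * (A.T @ x_half))

    # uniform average: every (half-)iterate weighted equally
    x_unif += (x_half - x_unif) / t
    y_unif += (y_half - y_unif) / t

    # increasing average: iterate t gets weight w_t = t
    w_sum += t
    x_inc += (t / w_sum) * (x_half - x_inc)
    y_inc += (t / w_sum) * (y_half - y_inc)

print("uniform average residual:   ", residual(A, x_unif, y_unif))
print("increasing average residual:", residual(A, x_inc, y_inc))
print("last iterate residual:      ", residual(A, x, y))
```

Both averages are maintained as running weighted means in $O(1)$ extra memory and arithmetic per iteration, which illustrates why increasing averaging adds no computational overhead relative to uniform averaging.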
