Extending the Reach of First-Order Algorithms for Nonconvex Min-Max Problems with Cohypomonotonicity
[1] V. Cevher,et al. Stable Nonconvex-Nonconcave Training via Linear Interpolation , 2023, NeurIPS.
[2] Jelena Diakonikolas,et al. Variance Reduced Halpern Iteration for Finite-Sum Monotone Inclusions , 2023, ArXiv.
[3] Q. Tran-Dinh,et al. Sublinear Convergence Rates of Extragradient-Type Methods: A Survey on Classical and Recent Developments , 2023, arXiv:2303.17192.
[4] Eduard A. Gorbunov,et al. Single-Call Stochastic Extragradient Methods for Structured Non-monotone Variational Inequalities: Improved Analysis under Weaker Conditions , 2023, NeurIPS.
[5] V. Cevher,et al. Escaping limit cycles: Global convergence for constrained nonconvex-nonconcave minimax problems , 2023, ICLR.
[6] V. Cevher,et al. Solving stochastic weak Minty variational inequalities without increasing batch size , 2023, ICLR.
[7] Quoc Tran-Dinh. Randomized Block-Coordinate Optimistic Gradient Algorithms for Root-Finding Problems , 2023, arXiv:2301.03113.
[8] Yura Malitsky,et al. Beyond the Golden Ratio for Variational Inequality Algorithms , 2022, J. Mach. Learn. Res..
[9] Pratik Worah,et al. The landscape of the proximal point method for nonconvex–nonconcave minimax optimization , 2022, Mathematical Programming.
[10] Eduard A. Gorbunov,et al. Convergence of Proximal Point and Extragradient-Based Methods Beyond Monotonicity: the Case of Negative Comonotonicity , 2022, ICML.
[11] Yang Cai,et al. Accelerated Single-Call Methods for Constrained Min-Max Optimization , 2022, ICLR.
[12] Luo Luo,et al. Near-Optimal Algorithms for Making the Gradient Small in Stochastic Minimax Optimization , 2022, ArXiv.
[13] R. Cominetti,et al. Stochastic Fixed-Point Iterations for Nonexpansive Maps: Convergence and Error Bounds , 2022, SIAM J. Control. Optim..
[14] Ernest K. Ryu,et al. Accelerated Minimax Algorithms Flock Together , 2022, arXiv:2205.11093.
[15] K. Levy,et al. Adapting to Mixing Time in Stochastic Optimization with Markovian Data , 2022, International Conference on Machine Learning.
[16] A. Böhm. Solving Nonconvex-Nonconcave Min-Max Problems Exhibiting Weak Minty Solutions , 2022, Trans. Mach. Learn. Res..
[17] Haihao Lu,et al. On the Linear Convergence of Extragradient Methods for Nonconvex-Nonconcave Minimax Problems , 2022, INFORMS J. Optim..
[18] Ioannis Mitliagkas,et al. Stochastic Gradient Descent-Ascent and Consensus Optimization for Smooth Games: Convergence Analysis under Expected Co-coercivity , 2021, NeurIPS.
[19] Yair Carmon,et al. Stochastic Bias-Reduced Gradient Methods , 2021, NeurIPS.
[20] Sucheol Lee,et al. Fast Extra Gradient Methods for Smooth Structured Nonconvex-Nonconcave Minimax Problems , 2021, NeurIPS.
[21] Ulrich Kohlenbach,et al. On the proximal point algorithm and its Halpern-type variant for generalized monotone operators in Hilbert space , 2021, Optimization Letters.
[22] TaeHo Yoon,et al. Accelerated Algorithms for Smooth Convex-Concave Minimax Problems with O(1/k^2) Rate on Squared Gradient Norm , 2021, ICML.
[23] Guanghui Lan. Policy mirror descent for reinforcement learning: linear convergence, new sampling complexity, and generalized problem classes , 2021, Mathematical Programming.
[24] Noah Golowich,et al. Independent Policy Gradient Methods for Competitive Reinforcement Learning , 2021, NeurIPS.
[25] Guanghui Lan,et al. Simple and optimal methods for stochastic variational inequalities, I: operator extrapolation , 2020, SIAM J. Optim..
[26] Michael I. Jordan,et al. Efficient Methods for Structured Nonconvex-Nonconcave Min-Max Optimization , 2020, AISTATS.
[27] John C. Duchi,et al. Large-Scale Methods for Distributionally Robust Optimization , 2020, NeurIPS.
[28] Constantinos Daskalakis,et al. The complexity of constrained min-max optimization , 2020, STOC.
[29] Felix Lieder,et al. On the convergence rate of the Halpern-iteration , 2020, Optim. Lett..
[30] E. R. Csetnek,et al. Two Steps at a Time - Taking GAN Training in Stride with Tseng's Method , 2020, SIAM J. Math. Data Sci..
[31] Ya-Ping Hsieh,et al. The limits of min-max optimization algorithms: convergence to spurious non-critical sets , 2020, ICML.
[32] Jelena Diakonikolas. Halpern Iteration for Near-Optimal and Parameter-Free Monotone Inclusion and Strong Solutions to Variational Inequalities , 2020, COLT.
[33] Laurentiu Leustean,et al. Quantitative results on a Halpern-type proximal point algorithm , 2020, Computational Optimization and Applications.
[34] Pontus Giselsson,et al. On compositions of special cases of Lipschitz continuous operators , 2019, Fixed Point Theory and Algorithms for Sciences and Engineering.
[35] John C. Duchi,et al. Lower bounds for non-convex stochastic optimization , 2019, Mathematical Programming.
[36] Hung M. Phan,et al. Conical averagedness and convergence analysis of fixed point algorithms , 2019, J. Glob. Optim..
[37] J. Malick,et al. On the convergence of single-call stochastic extra-gradient methods , 2019, NeurIPS.
[38] Donghwan Kim,et al. Accelerated proximal point method for maximally monotone operators , 2019, Mathematical Programming.
[39] Heinz H. Bauschke,et al. Generalized monotone operators and their averaged resolvents , 2019, Mathematical Programming.
[40] Minh N. Dao,et al. Adaptive Douglas-Rachford Splitting Algorithm for the Sum of Two Operators , 2018, SIAM J. Optim..
[41] Matthew K. Tam,et al. A Forward-Backward Splitting Method for Monotone Inclusions Without Cocoercivity , 2018, SIAM J. Optim..
[42] Zeyuan Allen-Zhu,et al. How To Make the Gradients Small Stochastically: Even Faster Convex and Nonconvex SGD , 2018, NeurIPS.
[43] Aleksander Madry,et al. Towards Deep Learning Models Resistant to Adversarial Attacks , 2017, ICLR.
[44] Shimrit Shtern,et al. A First Order Method for Solving Convex Bilevel Optimization Problems , 2017, SIAM J. Optim..
[45] Peter W. Glynn,et al. Unbiased Monte Carlo for optimization and functions of expectations via multi-level randomization , 2015, 2015 Winter Simulation Conference (WSC).
[46] Guanghui Lan,et al. On the convergence properties of non-Euclidean extragradient methods for variational inequalities with generalized monotone operators , 2013, Comput. Optim. Appl..
[47] Heinz H. Bauschke,et al. Convex Analysis and Monotone Operator Theory in Hilbert Spaces , 2011, CMS Books in Mathematics.
[48] Michael B. Giles,et al. Multilevel Monte Carlo Path Simulation , 2008, Oper. Res..
[49] P. L. Combettes,et al. Generalized Mann iterates for constructing fixed points in Hilbert spaces , 2002 .
[50] C. W. Groetsch,et al. A Note on Segmenting Mann Iterates , 1972 .
[51] B. Halpern. Fixed points of nonexpanding maps , 1967 .
[52] W. R. Mann,et al. Mean value methods in iteration , 1953 .
[53] Niao He,et al. On the Bias-Variance-Cost Tradeoff of Stochastic Optimization , 2021, NeurIPS.
[54] Ian J. Goodfellow,et al. Generative Adversarial Nets , 2014, NeurIPS.
[55] Patrick L. Combettes,et al. Proximal Methods for Cohypomonotone Operators , 2004, SIAM J. Control. Optim..
[56] F. Facchinei,et al. Finite-Dimensional Variational Inequalities and Complementarity Problems , 2003 .
[57] P. L. Combettes,et al. Quasi-Fejérian Analysis of Some Optimization Algorithms , 2001 .
[58] Paul Tseng. A Modified Forward-Backward Splitting Method for Maximal Monotone Mappings , 1998.
[59] J. von Neumann. A Model of General Economic Equilibrium , 1945.