A Fast Optimistic Method for Monotone Variational Inequalities
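As a reading aid for the references below: the "optimistic" gradient method (Popov [39]; see also [5], [24]) replaces the two operator evaluations per iteration of the extragradient method with a single evaluation, reusing the one from the previous step. The following is a minimal NumPy sketch of that classical update, not of the accelerated method proposed in this paper; the operator F, step size eta, iteration count, and the bilinear test problem are all illustrative assumptions.

```python
import numpy as np

def optimistic_gradient(F, z0, eta=0.1, num_iters=2000):
    """Optimistic (single-call) gradient iteration for a monotone operator F:
        z_{k+1} = z_k - 2*eta*F(z_k) + eta*F(z_{k-1}),
    an extragradient-like step that reuses the previous evaluation, so only
    one call to F is made inside the loop per iteration."""
    z = np.asarray(z0, dtype=float)
    F_prev = F(z)  # convention F(z_{-1}) := F(z_0); the first step is a plain gradient step
    for _ in range(num_iters):
        F_curr = F(z)
        z = z - 2.0 * eta * F_curr + eta * F_prev
        F_prev = F_curr
    return z

# Bilinear saddle point min_x max_y x*y, whose operator F(x, y) = (y, -x)
# is monotone but not cocoercive; the optimistic correction term drives the
# iterates to the solution (0, 0).
F = lambda z: np.array([z[1], -z[0]])
print(optimistic_gradient(F, z0=[1.0, 1.0]))  # approaches [0, 0]
```

On this bilinear example, plain gradient descent-ascent with the same step size cycles around the solution; that failure mode is exactly what the optimistic correction term repairs.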
[1] Yang Cai, et al. Accelerated Single-Call Methods for Constrained Min-Max Optimization, 2022, ICLR.
[2] Dang-Khoa Nguyen, et al. Fast Krasnosel'skii-Mann algorithm with a convergence rate of the fixed point iteration of $o(1/k)$, 2022, arXiv:2206.09462.
[3] Yang Cai, et al. Accelerated Algorithms for Monotone Inclusions and Constrained Nonconvex-Nonconcave Min-Max Optimization, 2022, arXiv.
[4] Yang Cai, et al. Tight Last-Iterate Convergence of the Extragradient and the Optimistic Gradient Descent-Ascent Algorithm for Constrained Monotone Variational Inequalities, 2022, arXiv:2204.09228.
[5] E. R. Csetnek, et al. Fast OGDA in continuous and discrete time, 2022, arXiv:2203.10947.
[6] Q. Tran-Dinh, et al. The Connection Between Nesterov's Accelerated Methods and Halpern Fixed-Point Iterations, 2022, arXiv:2203.04869.
[7] Michael I. Jordan, et al. Last-Iterate Convergence of Saddle Point Optimizers via High-Resolution Differential Equations, 2021, arXiv.
[8] Quoc Tran-Dinh, et al. Halpern-Type Accelerated and Splitting Algorithms for Monotone Inclusions, 2021, arXiv:2110.08150.
[9] Eduard A. Gorbunov, et al. Extragradient Method: O(1/K) Last-Iterate Convergence for Monotone Variational Inequalities and Connections With Cocoercivity, 2021, AISTATS.
[10] TaeHo Yoon, et al. Accelerated Algorithms for Smooth Convex-Concave Minimax Problems with O(1/k^2) Rate on Squared Gradient Norm, 2021, ICML.
[11] Noah Golowich, et al. Tight last-iterate convergence rates for no-regret learning in multi-player games, 2020, NeurIPS.
[12] Tatjana Chavdarova, et al. Taming GANs with Lookahead-Minmax, 2020, ICLR.
[13] E. R. Csetnek, et al. Two Steps at a Time - Taking GAN Training in Stride with Tseng's Method, 2020, SIAM J. Math. Data Sci.
[14] Noah Golowich, et al. Last Iterate is Slower than Averaged Iterate in Smooth Convex-Concave Saddle Point Problems, 2020, COLT.
[15] Natalia Gimelshein, et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library, 2019, NeurIPS.
[16] J. Malick, et al. On the convergence of single-call stochastic extra-gradient methods, 2019, NeurIPS.
[17] Jeff Donahue, et al. Large Scale GAN Training for High Fidelity Natural Image Synthesis, 2018, ICLR.
[18] Matthew K. Tam, et al. A Forward-Backward Splitting Method for Monotone Inclusions Without Cocoercivity, 2018, SIAM J. Optim.
[19] Yangyang Xu, et al. Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems, 2018, Math. Program.
[20] Constantinos Daskalakis, et al. The Limit Points of (Optimistic) Gradient Descent in Min-Max Optimization, 2018, NeurIPS.
[21] Gauthier Gidel, et al. A Variational Inequality Perspective on Generative Adversarial Networks, 2018, ICLR.
[22] Yuichi Yoshida, et al. Spectral Normalization for Generative Adversarial Networks, 2018, ICLR.
[23] Sebastian Nowozin, et al. Which Training Methods for GANs do actually Converge?, 2018, ICML.
[24] Constantinos Daskalakis, et al. Training GANs with Optimism, 2017, ICLR.
[25] Christos H. Papadimitriou, et al. Cycles in adversarial regularized learning, 2017, SODA.
[26] Sepp Hochreiter, et al. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium, 2017, NIPS.
[27] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[28] Sebastian Nowozin, et al. The Numerics of GANs, 2017, NIPS.
[29] Jonathan P. How, et al. Deep Decentralized Multi-task Multi-Agent Reinforcement Learning under Partial Observability, 2017, ICML.
[30] Ian J. Goodfellow, et al. NIPS 2016 Tutorial: Generative Adversarial Networks, 2016, arXiv.
[31] Wojciech Zaremba, et al. Improved Techniques for Training GANs, 2016, NIPS.
[32] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2016, CVPR.
[33] Radu Ioan Bot, et al. A forward-backward-forward differential equation and its asymptotic properties, 2015, arXiv:1503.07728.
[34] Yu. V. Malitsky, et al. Projected Reflected Gradient Methods for Monotone Variational Inequalities, 2015, SIAM J. Optim.
[35] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[36] Aaron C. Courville, et al. Generative Adversarial Networks, 2014, arXiv:1406.2661.
[37] Heinz H. Bauschke, et al. Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2011, CMS Books in Mathematics.
[38] Yurii Nesterov, et al. Dual extrapolation and its applications to solving variational inequalities and related problems, 2003, Math. Program.
[39] L. Popov. A modification of the Arrow-Hurwicz method for search of saddle points, 1980.
[40] B. Halpern. Fixed points of nonexpanding maps, 1967.
[41] Z. Opial. Weak convergence of the sequence of successive approximations for nonexpansive mappings, 1967.
[42] J. von Neumann, O. Morgenstern. Theory of Games and Economic Behavior, 1944, Princeton University Press.
[43] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[44] Arkadi Nemirovski, et al. Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems, 2004, SIAM J. Optim.
[45] F. Facchinei, et al. Finite-Dimensional Variational Inequalities and Complementarity Problems, 2003.
[46] Paul Tseng. A Modified Forward-Backward Splitting Method for Maximal Monotone Mappings, 2000, SIAM J. Control Optim.
[47] G. M. Korpelevich. The extragradient method for finding saddle points and other problems, 1976.