Poincaré Recurrence, Cycles and Spurious Equilibria in Gradient-Descent-Ascent for Non-Convex Non-Concave Zero-Sum Games
[1] I. Bendixson. Sur les courbes définies par des équations différentielles, 1901.
[2] J. Yorke, et al. Period Three Implies Chaos, 1975.
[3] M. Shub. Global Stability of Dynamical Systems, 1986.
[4] William H. Sandholm, et al. Population Games and Evolutionary Dynamics, 2010, Economic Learning and Social Evolution.
[5] Éva Tardos, et al. Beyond the Nash Equilibrium Barrier, 2011, ICS.
[6] Yoshua Bengio, et al. Generative Adversarial Nets, 2014, NIPS.
[7] Jeff S. Shamma, et al. Optimization Despite Chaos: Convex Relaxations to Complex Limit Sets via Poincaré Recurrence, 2014, SODA.
[8] David Pfau, et al. Connecting Generative Adversarial Networks and Actor-Critic Methods, 2016, ArXiv.
[9] Wojciech Zaremba, et al. Improved Techniques for Training GANs, 2016, NIPS.
[10] Soumith Chintala, et al. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, 2015, ICLR.
[11] David Pfau, et al. Unrolled Generative Adversarial Networks, 2016, ICLR.
[12] Yingyu Liang, et al. Generalization and Equilibrium in Generative Adversarial Nets (GANs), 2017, ICML.
[13] Alexei A. Efros, et al. Image-to-Image Translation with Conditional Adversarial Networks, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[14] Bernhard Schölkopf, et al. AdaGAN: Boosting Generative Models, 2017, NIPS.
[15] David Berthelot, et al. BEGAN: Boundary Equilibrium Generative Adversarial Networks, 2017, ArXiv.
[16] Georgios Piliouras, et al. Multiplicative Weights Update with Constant Step-Size in Congestion Games: Convergence, Limit Cycles and Chaos, 2017, NIPS.
[17] Christian Ledig, et al. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[18] Dimitris N. Metaxas, et al. StackGAN: Text to Photo-Realistic Image Synthesis with Stacked Generative Adversarial Networks, 2016, 2017 IEEE International Conference on Computer Vision (ICCV).
[19] Aaron C. Courville, et al. Improved Training of Wasserstein GANs, 2017, NIPS.
[20] Jacob Abernethy, et al. On Convergence and Stability of GANs, 2018.
[21] Thore Graepel, et al. The Mechanics of n-Player Differentiable Games, 2018, ICML.
[22] Rahul Savani, et al. Beyond Local Nash Equilibria for Adversarial Networks, 2018, BNAIC.
[23] Constantinos Daskalakis, et al. Training GANs with Optimism, 2017, ICLR.
[24] Sebastian Nowozin, et al. Which Training Methods for GANs do actually Converge?, 2018, ICML.
[25] Constantinos Daskalakis, et al. The Limit Points of (Optimistic) Gradient Descent in Min-Max Optimization, 2018, NeurIPS.
[26] Xu Chen, et al. Fictitious GAN: Training GANs with Historical Models, 2018, ECCV.
[27] Georgios Piliouras, et al. Three Body Problems in Evolutionary Game Dynamics: Convergence, Periodicity and Limit Cycles, 2018, AAMAS.
[28] Georgios Piliouras, et al. Multiplicative Weights Update in Zero-Sum Games, 2018, EC.
[29] Christos H. Papadimitriou, et al. Cycles in Adversarial Regularized Learning, 2017, SODA.
[30] Mingrui Liu, et al. Solving Weakly-Convex-Weakly-Concave Saddle-Point Problems as Weakly-Monotone Variational Inequality, 2018.
[31] Leonard J. Schulman, et al. Learning Dynamics and the Co-Evolution of Competing Sexual Species, 2017, ITCS.
[32] Liwei Wang, et al. Gradient Descent Finds Global Minima of Deep Neural Networks, 2018, ICML.
[33] Georgios Piliouras, et al. Finite Regret and Cycles with Fixed Step-Size via Alternating Gradient Descent-Ascent, 2019, COLT.
[34] Chuan-Sheng Foo, et al. Optimistic Mirror Descent in Saddle-Point Problems: Going the Extra (Gradient) Mile, 2018, ICLR.
[35] Yun Kuen Cheung, et al. Vortices Instead of Equilibria in MinMax Optimization: Chaos and Butterfly Effects of Online Learning in Zero-Sum Games, 2019, COLT.
[36] Constantinos Daskalakis, et al. Last-Iterate Convergence: Zero-Sum Games and Constrained Min-Max Optimization, 2018, ITCS.
[37] Ioannis Mitliagkas, et al. Negative Momentum for Improved Game Dynamics, 2018, AISTATS.
[38] Georgios Piliouras, et al. Fast and Furious Learning in Zero-Sum Games: Vanishing Regret with Non-Vanishing Step Sizes, 2019, NeurIPS.
[39] Gauthier Gidel, et al. A Variational Inequality Perspective on Generative Adversarial Networks, 2018, ICLR.
[40] Michael I. Jordan, et al. Minmax Optimization: Stable Limit Points of Gradient Descent Ascent are Locally Optimal, 2019, ArXiv.
[41] Georgios Piliouras, et al. Multi-Agent Learning in Network Zero-Sum Games is a Hamiltonian System, 2019, AAMAS.
[42] Thomas Hofmann, et al. Local Saddle Point Optimization: A Curvature Exploitation Approach, 2018, AISTATS.
[43] Stefan Winkler, et al. The Unusual Effectiveness of Averaging in GAN Training, 2018, ICLR.
[44] S. Shankar Sastry, et al. On Gradient-Based Learning in Continuous Games, 2018, SIAM J. Math. Data Sci.
[45] Jacob Abernethy, et al. Last-Iterate Convergence Rates for Min-Max Optimization, 2019, ArXiv.