Solving a Class of Non-Convex Min-Max Games Using Iterative First Order Methods
Jason D. Lee | Meisam Razaviyayn | Maziar Sanjabi | Maher Nouiehed | Tianjian Huang