A Solvable High-Dimensional Model of GAN
Hong Hu | Yue M. Lu | Chuang Wang