Understanding the Effectiveness of Lipschitz-Continuity in Generative Adversarial Nets
Zhiming Zhou | Jiadong Liang | Yuxuan Song | Lantao Yu | Hongwei Wang | Weinan Zhang | Yong Yu | Zhihua Zhang