On the convergence and mode collapse of GAN
Jun Yu | Zhaoyu Zhang | Mengyan Li
[1] Wojciech Zaremba, et al. Improved Techniques for Training GANs, 2016, NIPS.
[2] Andrew Gordon Wilson, et al. Bayesian GAN, 2017, NIPS.
[3] Soumith Chintala, et al. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, 2015, ICLR.
[4] Jacob Abernethy, et al. On Convergence and Stability of GANs, 2018.
[5] Matthias Bethge, et al. A note on the evaluation of generative models, 2015, ICLR.
[6] Li Fei-Fei, et al. ImageNet: A large-scale hierarchical image database, 2009, CVPR.
[7] Trung Le, et al. Dual Discriminator Generative Adversarial Nets, 2017, NIPS.
[8] Yoshua Bengio, et al. Improving Generative Adversarial Networks with Denoising Feature Matching, 2016, ICLR.
[9] Kenta Oono, et al. Chainer: A Next-Generation Open Source Framework for Deep Learning, 2015.
[10] Léon Bottou, et al. Towards Principled Methods for Training Generative Adversarial Networks, 2017, ICLR.
[11] Sepp Hochreiter, et al. Self-Normalizing Neural Networks, 2017, NIPS.
[12] Aaron C. Courville, et al. Improved Training of Wasserstein GANs, 2017, NIPS.
[13] Yoshua Bengio, et al. Generative Adversarial Nets, 2014, NIPS.
[14] Marc G. Bellemare, et al. The Cramer Distance as a Solution to Biased Wasserstein Gradients, 2017, arXiv.
[15] David Berthelot, et al. BEGAN: Boundary Equilibrium Generative Adversarial Networks, 2017, arXiv.