Lipschitz Generative Adversarial Nets
Zhiming Zhou | Jiadong Liang | Yuxuan Song | Lantao Yu | Hongwei Wang | Weinan Zhang | Zhihua Zhang | Yong Yu