Which Training Methods for GANs do actually Converge?
Lars M. Mescheder | Andreas Geiger | Sebastian Nowozin
[1] Alex Krizhevsky et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[2] P. Olver. Nonlinear Systems, 2013.
[3] Yoshua Bengio et al. Generative Adversarial Nets, 2014, NIPS.
[4] Yinda Zhang et al. LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop, 2015, ArXiv.
[5] Xiaogang Wang et al. Deep Learning Face Attributes in the Wild, 2014, 2015 IEEE International Conference on Computer Vision (ICCV).
[6] Michael S. Bernstein et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.
[7] Yann LeCun et al. Energy-based Generative Adversarial Network, 2016, ICLR.
[8] Jian Sun et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[9] Yuan Yu et al. TensorFlow: A system for large-scale machine learning, 2016, OSDI.
[10] Wojciech Zaremba et al. Improved Techniques for Training GANs, 2016, NIPS.
[11] Jian Sun et al. Identity Mappings in Deep Residual Networks, 2016, ECCV.
[12] Soumith Chintala et al. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, 2015, ICLR.
[13] Sebastian Nowozin et al. f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization, 2016, NIPS.
[14] Yoshua Bengio et al. Boundary-Seeking Generative Adversarial Networks, 2017, ICLR.
[15] J. Zico Kolter et al. Gradient descent GAN optimization is locally stable, 2017, NIPS.
[16] Sepp Hochreiter et al. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium, 2017, NIPS.
[17] Lucas Theis et al. Amortised MAP Inference for Image Super-resolution, 2016, ICLR.
[18] Sebastian Nowozin et al. The Numerics of GANs, 2017, NIPS.
[19] Sebastian Nowozin et al. Stabilizing Training of Generative Adversarial Networks through Regularization, 2017, NIPS.
[20] Léon Bottou et al. Towards Principled Methods for Training Generative Adversarial Networks, 2017, ICLR.
[21] David Berthelot et al. BEGAN: Boundary Equilibrium Generative Adversarial Networks, 2017, ArXiv.
[22] Luca Antiga et al. Automatic differentiation in PyTorch, 2017.
[23] Jonathon Shlens et al. Conditional Image Synthesis with Auxiliary Classifier GANs, 2016, ICML.
[24] Aaron C. Courville et al. Improved Training of Wasserstein GANs, 2017, NIPS.
[25] Jaakko Lehtinen et al. Progressive Growing of GANs for Improved Quality, Stability, and Variation, 2017, ICLR.
[26] Yuichi Yoshida et al. Spectral Normalization for Generative Adversarial Networks, 2018, ICLR.
[27] Rishi Sharma et al. A Note on the Inception Score, 2018, ArXiv.
[28] Gauthier Gidel et al. A Variational Inequality Perspective on Generative Adversarial Networks, 2018, ICLR.
[29] Stefan Winkler et al. The Unusual Effectiveness of Averaging in GAN Training, 2018, ICLR.