[1] Gene H. Golub and Henk A. van der Vorst. Eigenvalue computation in the 20th century, 2000, J. Comput. Appl. Math.
[2] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[3] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks, 2010, AISTATS.
[4] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[5] Mehdi Mirza and Simon Osindero. Conditional Generative Adversarial Nets, 2014, ArXiv.
[6] Ian J. Goodfellow, et al. Generative Adversarial Nets, 2014, NIPS.
[7] Andrew M. Saxe, et al. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks, 2013, ICLR.
[8] Emily L. Denton, et al. Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks, 2015, NIPS.
[9] Sergey Ioffe and Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[10] Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[11] Olga Russakovsky, et al. ImageNet Large Scale Visual Recognition Challenge, 2014, IJCV.
[12] Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[13] Christian Szegedy, et al. Rethinking the Inception Architecture for Computer Vision, 2016, CVPR.
[14] Kaiming He, et al. Deep Residual Learning for Image Recognition, 2016, CVPR.
[15] Xi Chen, et al. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets, 2016, NIPS.
[16] Lucas Theis, et al. A note on the evaluation of generative models, 2015, ICLR.
[17] Tim Salimans and Diederik P. Kingma. Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks, 2016, NIPS.
[18] Martín Abadi, et al. TensorFlow: A system for large-scale machine learning, 2016, OSDI.
[19] Tim Salimans, et al. Improved Techniques for Training GANs, 2016, NIPS.
[20] Alec Radford, et al. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, 2015, ICLR.
[21] Augustus Odena, et al. Deconvolution and Checkerboard Artifacts, 2016, Distill.
[22] Sebastian Nowozin, et al. f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization, 2016, NIPS.
[23] Andrew Brock, et al. Neural Photo Editing with Introspective Adversarial Networks, 2016, ICLR.
[24] Martin Heusel, et al. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium, 2017, NIPS.
[25] Dustin Tran, et al. Hierarchical Implicit Models and Likelihood-Free Variational Inference, 2017, NIPS.
[26] Casper Kaae Sønderby, et al. Amortised MAP Inference for Image Super-resolution, 2016, ICLR.
[27] Jae Hyun Lim and Jong Chul Ye. Geometric GAN, 2017, ArXiv.
[28] Marco Marchesi. Megapixel Size Image Creation using Generative Adversarial Networks, 2017, ArXiv.
[29] Xudong Mao, et al. Least Squares Generative Adversarial Networks, 2017, ICCV.
[30] Yuhuai Wu, et al. On the Quantitative Analysis of Decoder-Based Generative Models, 2016, ICLR.
[31] Chen Sun, et al. Revisiting Unreasonable Effectiveness of Data in Deep Learning Era, 2017, ICCV.
[32] Vincent Dumoulin, et al. A Learned Representation For Artistic Style, 2016, ICLR.
[33] Martin Arjovsky, et al. Wasserstein Generative Adversarial Networks, 2017, ICML.
[34] Marc G. Bellemare, et al. The Cramer Distance as a Solution to Biased Wasserstein Gradients, 2017, ArXiv.
[35] Augustus Odena, et al. Conditional Image Synthesis with Auxiliary Classifier GANs, 2016, ICML.
[36] Ishaan Gulrajani, et al. Improved Training of Wasserstein GANs, 2017, NIPS.
[37] Harm de Vries, et al. Modulating early visual processing by language, 2017, NIPS.
[38] Naveen Kodali, et al. On Convergence and Stability of GANs, 2018, ArXiv.
[39] William Fedus, et al. Many Paths to Equilibrium: GANs Do Not Need to Decrease a Divergence At Every Step, 2017, ICLR.
[40] Mathijs Pieters and Marco Wiering. Comparing Generative Adversarial Network Techniques for Image Creation and Modification, 2018, ArXiv.
[41] Tim Salimans, et al. Improving GANs Using Optimal Transport, 2018, ICLR.
[42] Lars Mescheder, et al. Which Training Methods for GANs do actually Converge?, 2018, ICML.
[43] Tero Karras, et al. Progressive Growing of GANs for Improved Quality, Stability, and Variation, 2017, ICLR.
[44] Ethan Perez, et al. FiLM: Visual Reasoning with a General Conditioning Layer, 2017, AAAI.
[45] Takeru Miyato, et al. Spectral Normalization for Generative Adversarial Networks, 2018, ICLR.
[46] Xiaolong Wang, et al. Non-local Neural Networks, 2018, CVPR.
[47] Mikołaj Bińkowski, et al. Demystifying MMD GANs, 2018, ICLR.
[48] Takeru Miyato and Masanori Koyama. cGANs with Projection Discriminator, 2018, ICLR.
[49] Augustus Odena, et al. Is Generator Conditioning Causally Related to GAN Performance?, 2018, ICML.
[50] Shane Barratt and Rishi Sharma. A Note on the Inception Score, 2018, ArXiv.
[51] Han Zhang, et al. Self-Attention Generative Adversarial Networks, 2018, ICML.
[52] Yasin Yazıcı, et al. The Unusual Effectiveness of Averaging in GAN Training, 2018, ICLR.