Trung Le | Tu Dinh Nguyen | Dinh Q. Phung | Quan Hoang
[1] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[2] Yinda Zhang, et al. LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop, 2015, arXiv.
[3] Honglak Lee, et al. An Analysis of Single-Layer Networks in Unsupervised Feature Learning, 2011, AISTATS.
[4] Aaron C. Courville, et al. Adversarially Learned Inference, 2016, ICLR.
[5] Yoshua Bengio, et al. Improving Generative Adversarial Networks with Denoising Feature Matching, 2016, ICLR.
[6] Geoffrey E. Hinton, et al. ImageNet Classification with Deep Convolutional Neural Networks, 2012, Commun. ACM.
[7] Sepp Hochreiter, et al. Self-Normalizing Neural Networks, 2017, NIPS.
[8] Joost van de Weijer, et al. Ensembles of Generative Adversarial Networks, 2016, arXiv.
[9] Dumitru Erhan, et al. Going Deeper with Convolutions, 2015, CVPR.
[10] Soumith Chintala, et al. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, 2015, ICLR.
[11] Sridhar Mahadevan, et al. Generative Multi-Adversarial Networks, 2016, ICLR.
[12] Philip H. S. Torr, et al. Multi-agent Diverse Generative Adversarial Networks, 2018, CVPR.
[13] Ian J. Goodfellow, et al. NIPS 2016 Tutorial: Generative Adversarial Networks, 2016, arXiv.
[14] David Berthelot, et al. BEGAN: Boundary Equilibrium Generative Adversarial Networks, 2017, arXiv.
[15] Ferenc Huszar, et al. How (not) to Train your Generative Model: Scheduled Sampling, Likelihood, Adversary?, 2015, arXiv.
[16] Yoshua Bengio, et al. Generative Adversarial Nets, 2014, NIPS.
[17] Yoshua Bengio, et al. Maxout Networks, 2013, ICML.
[18] Andrew L. Maas. Rectifier Nonlinearities Improve Neural Network Acoustic Models, 2013.
[19] Geoffrey E. Hinton, et al. Rectified Linear Units Improve Restricted Boltzmann Machines, 2010, ICML.
[20] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[21] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[22] Trung Le, et al. Dual Discriminator Generative Adversarial Nets, 2017, NIPS.
[23] David Pfau, et al. Unrolled Generative Adversarial Networks, 2016, ICLR.
[24] Matthias Bethge, et al. A Note on the Evaluation of Generative Models, 2015, ICLR.
[25] Michael S. Bernstein, et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.
[26] J. von Neumann. Zur Theorie der Gesellschaftsspiele, 1928.
[27] Jonathon Shlens, et al. Conditional Image Synthesis with Auxiliary Classifier GANs, 2016, ICML.
[28] Yoshua Bengio, et al. What Regularized Auto-Encoders Learn from the Data-Generating Distribution, 2012, J. Mach. Learn. Res.
[29] Martín Abadi, et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems, 2016, arXiv.
[30] Bernhard Schölkopf, et al. AdaGAN: Boosting Generative Models, 2017, NIPS.
[31] Wojciech Zaremba, et al. Improved Techniques for Training GANs, 2016, NIPS.
[32] Yann LeCun, et al. Energy-based Generative Adversarial Network, 2016, ICLR.
[33] Yingyu Liang, et al. Generalization and Equilibrium in Generative Adversarial Nets (GANs), 2017, ICML.
[34] Yiannis Demiris, et al. MAGAN: Margin Adaptation for Generative Adversarial Networks, 2017, arXiv.
[35] Xiaogang Wang, et al. Deep Learning Face Attributes in the Wild, 2015, ICCV.