Yoshua Bengio | Kyunghyun Cho | R. Devon Hjelm | Tong Che | Athul Paul Jacob
[1] Geoffrey E. Hinton et al. Deep Boltzmann Machines, 2009, AISTATS.
[2] Andriy Mnih et al. Variational Inference for Monte Carlo Objectives, 2016, ICML.
[3] Alan Ritter et al. Adversarial Learning for Neural Dialogue Generation, 2017, EMNLP.
[4] Pieter Abbeel et al. On a Connection between Importance Sampling and the Likelihood Ratio Policy Gradient, 2010, NIPS.
[5] Dustin Tran et al. Hierarchical Implicit Models and Likelihood-Free Variational Inference, 2017, NIPS.
[6] Sanja Fidler et al. Towards Diverse and Natural Image Descriptions via a Conditional GAN, 2017, ICCV.
[7] Raymond Y. K. Lau et al. Least Squares Generative Adversarial Networks, 2016, ICCV.
[8] D. B. Rubin et al. Maximum Likelihood from Incomplete Data via the EM Algorithm (with discussion), 1977, Journal of the Royal Statistical Society, Series B.
[9] Aaron C. Courville et al. Adversarially Learned Inference, 2016, ICLR.
[10] Sebastian Nowozin et al. Stabilizing Training of Generative Adversarial Networks through Regularization, 2017, NIPS.
[11] Yoshua Bengio et al. Blocks and Fuel: Frameworks for deep learning, 2015, arXiv.
[12] Sergey Ioffe et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[13] Yinda Zhang et al. LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop, 2015, arXiv.
[14] Geoffrey E. Hinton et al. The Helmholtz Machine, 1995, Neural Computation.
[15] Yoshua Bengio et al. Professor Forcing: A New Algorithm for Training Recurrent Networks, 2016, NIPS.
[16] Karol Gregor et al. Neural Variational Inference and Learning in Belief Networks, 2014, ICML.
[17] Geoffrey E. Hinton et al. Generating Text with Recurrent Neural Networks, 2011, ICML.
[18] Kenji Fukumizu et al. On Integral Probability Metrics, φ-Divergences and Binary Classification, 2009, arXiv:0901.2698.
[19] Jimmy Ba et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[20] Ian J. Goodfellow. NIPS 2016 Tutorial: Generative Adversarial Networks, 2016, arXiv.
[21] Martin J. Wainwright et al. Estimating Divergence Functionals and the Likelihood Ratio by Convex Risk Minimization, 2008, IEEE Transactions on Information Theory.
[22] Ben Poole et al. Categorical Reparameterization with Gumbel-Softmax, 2016, ICLR.
[23] Max Welling et al. Auto-Encoding Variational Bayes, 2013, ICLR.
[24] Michael I. Jordan et al. Mean Field Theory for Sigmoid Belief Networks, 1996, J. Artif. Intell. Res.
[25] Xiaogang Wang et al. Deep Learning Face Attributes in the Wild, 2014, ICCV.
[26] Dustin Tran et al. Deep and Hierarchical Implicit Models, 2017, arXiv.
[27] Ruslan Salakhutdinov et al. On the Quantitative Analysis of Decoder-Based Generative Models, 2016, ICLR.
[28] Yoshua Bengio et al. Maximum-Likelihood Augmented Discrete Generative Adversarial Networks, 2017, arXiv.
[29] Geoffrey E. Hinton et al. ImageNet Classification with Deep Convolutional Neural Networks, 2012, Commun. ACM.
[30] Vaibhava Goel et al. McGan: Mean and Covariance Feature Matching GAN, 2017, ICML.
[31] Jascha Sohl-Dickstein et al. REBAR: Low-Variance, Unbiased Gradient Estimates for Discrete Latent Variable Models, 2017, NIPS.
[32] Wojciech Zaremba et al. Improved Techniques for Training GANs, 2016, NIPS.
[33] Yoshua Bengio et al. Reweighted Wake-Sleep, 2014, ICLR.
[34] Aaron C. Courville et al. Improved Training of Wasserstein GANs, 2017, NIPS.
[35] Tomáš Mikolov. Statistical Language Models Based on Neural Networks, 2012, PhD thesis, Brno University of Technology.
[36] E. Gumbel. Statistical Theory of Extreme Values and Some Practical Applications: A Series of Lectures, 1954.
[37] Yoshua Bengio et al. Generative Adversarial Nets, 2014, NIPS.
[38] Alex Krizhevsky. Learning Multiple Layers of Features from Tiny Images, 2009.
[39] Lantao Yu et al. SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient, 2016, AAAI.
[40] Shakir Mohamed et al. Learning in Implicit Generative Models, 2016, arXiv.
[41] Trevor Darrell et al. Simultaneous Deep Transfer Across Domains and Tasks, 2015, ICCV.
[42] Alexander M. Rush et al. Adversarially Regularized Autoencoders for Generating Discrete Structures, 2017, arXiv.
[43] Sergey Levine et al. MuProp: Unbiased Backpropagation for Stochastic Neural Networks, 2015, ICLR.
[44] Yoshua Bengio et al. Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation, 2013, arXiv.
[45] Jian Sun et al. Deep Residual Learning for Image Recognition, 2015, CVPR.
[46] Ruslan Salakhutdinov et al. On the Quantitative Analysis of Deep Belief Networks, 2008, ICML.
[47] Soumith Chintala et al. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, 2015, ICLR.
[48] Geoffrey E. Hinton. Training Products of Experts by Minimizing Contrastive Divergence, 2002, Neural Computation.
[49] Thorsten Brants et al. One Billion Word Benchmark for Measuring Progress in Statistical Language Modeling, 2013, INTERSPEECH.
[50] Ferenc Huszár. Variational Inference using Implicit Distributions, 2017, arXiv.
[51] Bernhard Schölkopf et al. AdaGAN: Boosting Generative Models, 2017, NIPS.
[52] Yee Whye Teh et al. The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables, 2016, ICLR.
[53] Alexander J. Smola et al. Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy, 2016, ICLR.
[54] Sebastian Nowozin et al. f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization, 2016, NIPS.
[55] Gauthier Gidel et al. Parametric Adversarial Divergences are Good Task Losses for Generative Modeling, 2017, ICLR.
[56] David Berthelot et al. BEGAN: Boundary Equilibrium Generative Adversarial Networks, 2017, arXiv.
[57] Andrew Y. Ng et al. Reading Digits in Natural Images with Unsupervised Feature Learning, 2011.
[58] John Salvatier et al. Theano: A Python Framework for Fast Computation of Mathematical Expressions, 2016, arXiv.