Learning Deep Generative Models With Discrete Latent Variables
[1] Radford M. Neal. Connectionist Learning of Belief Networks, 1992, Artif. Intell.
[2] Geoffrey E. Hinton, et al. The "wake-sleep" algorithm for unsupervised neural networks, 1995, Science.
[3] Geoffrey E. Hinton. Training Products of Experts by Minimizing Contrastive Divergence, 2002, Neural Computation.
[4] Yee Whye Teh, et al. A Fast Learning Algorithm for Deep Belief Nets, 2006, Neural Computation.
[5] Ruslan Salakhutdinov, et al. On the quantitative analysis of deep belief networks, 2008, ICML.
[6] Tijmen Tieleman, et al. Training restricted Boltzmann machines using approximations to the likelihood gradient, 2008, ICML.
[7] Honglak Lee, et al. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations, 2009, ICML.
[8] Geoffrey E. Hinton, et al. Deep Boltzmann Machines, 2009, AISTATS.
[9] Hugo Larochelle, et al. The Neural Autoregressive Distribution Estimator, 2011, AISTATS.
[10] Jiquan Ngiam, et al. Learning Deep Energy Models, 2011, ICML.
[11] Karol Gregor, et al. Neural Variational Inference and Learning in Belief Networks, 2014, ICML.
[12] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[13] Max Welling, et al. Auto-Encoding Variational Bayes, 2013, ICLR.
[14] Daan Wierstra, et al. Deep AutoRegressive Networks, 2013, ICML.
[15] Shakir Mohamed, et al. Variational Inference with Normalizing Flows, 2015, ICML.
[16] Yoshua Bengio, et al. Reweighted Wake-Sleep, 2014, ICLR.
[17] Ruslan Salakhutdinov, et al. Accurate and conservative estimates of MRF log-likelihood using reverse annealing, 2014, AISTATS.
[18] Alex Graves, et al. DRAW: A Recurrent Neural Network For Image Generation, 2015, ICML.
[19] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[20] Alex Graves, et al. Conditional Image Generation with PixelCNN Decoders, 2016, NIPS.
[21] Andriy Mnih, et al. Variational Inference for Monte Carlo Objectives, 2016, ICML.
[22] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2016, CVPR.
[23] Matthias Bethge, et al. A note on the evaluation of generative models, 2015, ICLR.
[24] Tim Salimans, et al. Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks, 2016, NIPS.
[25] Ruslan Salakhutdinov, et al. Importance Weighted Autoencoders, 2015, ICLR.
[26] Koray Kavukcuoglu, et al. Pixel Recurrent Neural Networks, 2016, ICML.
[27] Wojciech Zaremba, et al. Improved Techniques for Training GANs, 2016, NIPS.
[28] Jian Sun, et al. Identity Mappings in Deep Residual Networks, 2016, ECCV.
[29] Vincent Dumoulin, et al. Deconvolution and Checkerboard Artifacts, 2016.
[30] Zhe Gan, et al. Variational Autoencoder for Deep Learning of Images, Labels and Captions, 2016, NIPS.
[31] Sepp Hochreiter, et al. Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs), 2015, ICLR.
[32] Philip Bachman, et al. Calibrating Energy-based Generative Adversarial Networks, 2017, ICLR.
[33] Valero Laparra, et al. End-to-end Optimized Image Compression, 2016, ICLR.
[34] Léon Bottou, et al. Wasserstein GAN, 2017, arXiv.
[35] David Vázquez, et al. PixelVAE: A Latent Variable Model for Natural Images, 2016, ICLR.
[36] Kilian Q. Weinberger, et al. Densely Connected Convolutional Networks, 2017, CVPR.
[37] Pieter Abbeel, et al. Variational Lossy Autoencoder, 2016, ICLR.
[38] Yingyu Liang, et al. Generalization and Equilibrium in Generative Adversarial Nets (GANs), 2017, ICML.
[39] Xi Chen, et al. PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications, 2017, ICLR.
[40] Max Welling, et al. Improved Variational Inference with Inverse Autoregressive Flow, 2016, NIPS.
[41] Zhiting Hu, et al. Improved Variational Autoencoders for Text Modeling using Dilated Convolutions, 2017, ICML.
[42] Ruslan Salakhutdinov, et al. On the Quantitative Analysis of Decoder-Based Generative Models, 2016, ICLR.