Learning Deep Generative Models With Doubly Stochastic Gradient MCMC
[1] Koray Kavukcuoglu,et al. Pixel Recurrent Neural Networks , 2016, ICML.
[2] Neural Adaptive Sequential Monte Carlo Supplementary Material , 2015 .
[3] Tianqi Chen,et al. Stochastic Gradient Hamiltonian Monte Carlo , 2014, ICML.
[4] Brendan J. Frey,et al. Graphical Models for Machine Learning and Digital Communication , 1998 .
[5] Geoffrey E. Hinton,et al. ImageNet classification with deep convolutional neural networks , 2012, Commun. ACM.
[6] Tara N. Sainath,et al. FUNDAMENTAL TECHNOLOGIES IN MODERN SPEECH RECOGNITION Digital Object Identifier 10.1109/MSP.2012.2205597 , 2012 .
[7] Yoshua Bengio,et al. Gradient-based learning applied to document recognition , 1998, Proc. IEEE.
[8] Dustin Tran,et al. Variational Gaussian Process , 2015, ICLR.
[9] Zhe Gan,et al. Learning Deep Sigmoid Belief Networks with Data Augmentation , 2015, AISTATS.
[10] Tapani Raiko,et al. Enhanced Gradient for Training Restricted Boltzmann Machines , 2013, Neural Computation.
[11] Jason Yosinski,et al. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images , 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[12] Jeffrey Pennington,et al. Dynamic Pooling and Unfolding Recursive Autoencoders for Paraphrase Detection , 2011, NIPS.
[13] Yee Whye Teh,et al. A Fast Learning Algorithm for Deep Belief Nets , 2006, Neural Computation.
[14] Geoffrey E. Hinton. Training Products of Experts by Minimizing Contrastive Divergence , 2002, Neural Computation.
[15] Yoshua Bengio,et al. Understanding the difficulty of training deep feedforward neural networks , 2010, AISTATS.
[16] Michael I. Jordan,et al. An Introduction to Variational Methods for Graphical Models , 1999, Machine-mediated learning.
[17] Jimmy Ba,et al. Adam: A Method for Stochastic Optimization , 2014, ICLR.
[18] Julien Cornebise,et al. Weight Uncertainty in Neural Networks , 2015, ICML.
[19] Tapani Raiko,et al. Iterative Neural Autoregressive Distribution Estimator NADE-k , 2014, NIPS.
[20] Yoshua Bengio,et al. Deep Generative Stochastic Networks Trainable by Backprop , 2013, ICML.
[21] Alexander Binder,et al. Evaluating the Visualization of What a Deep Neural Network Has Learned , 2015, IEEE Transactions on Neural Networks and Learning Systems.
[22] Julien Cornebise,et al. Weight Uncertainty in Neural Networks , 2015, ArXiv.
[23] Max Welling,et al. Auto-Encoding Variational Bayes , 2013, ICLR.
[24] Ruslan Salakhutdinov,et al. Importance Weighted Autoencoders , 2015, ICLR.
[25] Michael I. Jordan,et al. Mean Field Theory for Sigmoid Belief Networks , 1996, J. Artif. Intell. Res..
[26] Geoffrey E. Hinton,et al. The "wake-sleep" algorithm for unsupervised neural networks , 1995, Science.
[27] Kai Fan,et al. High-Order Stochastic Gradient Thermostats for Bayesian Learning of Deep Models , 2015, AAAI.
[28] Nando de Freitas,et al. Inductive Principles for Restricted Boltzmann Machine Learning , 2010, AISTATS.
[29] Geoffrey E. Hinton,et al. Bayesian Learning for Neural Networks , 1995 .
[30] Yoshua Bengio,et al. Reweighted Wake-Sleep , 2014, ICLR.
[31] Shane Legg,et al. Human-level control through deep reinforcement learning , 2015, Nature.
[32] Yali Amit,et al. Mixtures of Sparse Autoregressive Networks , 2015 .
[33] Yoshua Bengio,et al. Modeling Temporal Dependencies in High-Dimensional Sequences: Application to Polyphonic Music Generation and Transcription , 2012, ICML.
[34] Hugo Larochelle,et al. The Neural Autoregressive Distribution Estimator , 2011, AISTATS.
[35] Daan Wierstra,et al. Deep AutoRegressive Networks , 2013, ICML.
[36] Karol Gregor,et al. Neural Variational Inference and Learning in Belief Networks , 2014, ICML.
[37] Alex Graves,et al. DRAW: A Recurrent Neural Network For Image Generation , 2015, ICML.
[38] Bo Zhang,et al. Learning to Generate with Memory , 2016, ICML.
[39] David J. C. MacKay,et al. A Practical Bayesian Framework for Backpropagation Networks , 1992, Neural Computation.
[40] Jen-Tzung Chien,et al. Bayesian Recurrent Neural Network for Language Modeling , 2016, IEEE Transactions on Neural Networks and Learning Systems.
[41] Sean Gerrish,et al. Black Box Variational Inference , 2013, AISTATS.
[42] Ole Winther,et al. Ladder Variational Autoencoders , 2016, NIPS.
[43] Ryan P. Adams,et al. Learning the Structure of Deep Sparse Graphical Models , 2009, AISTATS.
[44] Zhe Gan,et al. Scalable Deep Poisson Factor Analysis for Topic Modeling , 2015, ICML.
[45] H. Robbins. A Stochastic Approximation Method , 1951 .
[46] Pieter Abbeel,et al. Max-margin Classification of Data with Absent Features , 2008, J. Mach. Learn. Res..
[47] Ryan Babbush,et al. Bayesian Sampling Using Stochastic Gradient Thermostats , 2014, NIPS.
[48] Radford M. Neal. Connectionist Learning of Belief Networks , 1992, Artif. Intell..
[49] Daan Wierstra,et al. Stochastic Backpropagation and Approximate Inference in Deep Generative Models , 2014, ICML.
[50] Miguel Lázaro-Gredilla,et al. Doubly Stochastic Variational Bayes for non-Conjugate Inference , 2014, ICML.
[51] Hugo Larochelle,et al. A Deep and Tractable Density Estimator , 2013, ICML.
[52] Yee Whye Teh,et al. Bayesian Learning via Stochastic Gradient Langevin Dynamics , 2011, ICML.
[53] Geoffrey E. Hinton,et al. Deep Boltzmann Machines , 2009, AISTATS.
[54] Ruslan Salakhutdinov,et al. Evaluating probabilities under high-dimensional latent variable models , 2008, NIPS.
[55] Andriy Mnih,et al. Variational Inference for Monte Carlo Objectives , 2016, ICML.
[56] Sungjin Ahn,et al. Bayesian Posterior Sampling via Stochastic Gradient Fisher Scoring , 2012, ICML.
[57] Samy Bengio,et al. Modeling High-Dimensional Discrete Data with Multi-Layer Neural Networks , 1999, NIPS.
[58] Ruslan Salakhutdinov,et al. On the quantitative analysis of deep belief networks , 2008, ICML.
[59] Pierre Priouret,et al. Adaptive Algorithms and Stochastic Approximations , 1990, Applications of Mathematics.
[60] Joshua B. Tenenbaum,et al. One-shot learning by inverting a compositional causal process , 2013, NIPS.