[1] Charles H. Bennett. Efficient estimation of free energy differences from Monte Carlo data, 1976.
[2] Paul Smolensky. Information processing in dynamical systems: foundations of harmony theory, 1986.
[3] Robert H. Swendsen, et al. Replica Monte Carlo simulation of spin glasses, 1986, Physical Review Letters.
[4] Judea Pearl. Probabilistic reasoning in intelligent systems: networks of plausible inference, 1991, Morgan Kaufmann Series in Representation and Reasoning.
[5] David J. Spiegelhalter, et al. Sequential updating of conditional probabilities on directed graphical structures, 1990, Networks.
[6] Radford M. Neal. Connectionist Learning of Belief Networks, 1992, Artif. Intell.
[7] Geoffrey E. Hinton, et al. Autoencoders, Minimum Description Length and Helmholtz Free Energy, 1993, NIPS.
[8] Paul Dagum, et al. Approximating Probabilistic Inference in Bayesian Belief Networks is NP-Hard, 1993, Artif. Intell.
[9] Bruno A. Olshausen, et al. Emergence of simple-cell receptive field properties by learning a sparse code for natural images, 1996, Nature.
[10] Yann LeCun, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[11] Ronald J. Williams. Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning, 1992, Machine Learning.
[12] Michael I. Jordan, et al. An Introduction to Variational Methods for Graphical Models, 1999, Machine Learning.
[13] Geoffrey E. Hinton, et al. A Fast Learning Algorithm for Deep Belief Nets, 2006, Neural Computation.
[14] Juha Karhunen, et al. Building Blocks for Variational Bayesian Learning of Latent Variable Models, 2007, J. Mach. Learn. Res.
[15] Ruslan Salakhutdinov, et al. On the quantitative analysis of deep belief networks, 2008, ICML.
[16] Iain Murray, et al. Evaluating probabilities under high-dimensional latent variable models, 2008, NIPS.
[17] Tijmen Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient, 2008, ICML.
[18] Michael R. Shirts, et al. Statistically optimal analysis of samples from multiple equilibrium states, 2008, The Journal of Chemical Physics.
[19] Ruslan Salakhutdinov, et al. Deep Boltzmann Machines, 2009, AISTATS.
[20] Benjamin M. Marlin, et al. Inductive Principles for Restricted Boltzmann Machine Learning, 2010, AISTATS.
[21] Philip M. Long, et al. Restricted Boltzmann Machines are Hard to Approximately Evaluate or Simulate, 2010, ICML.
[22] Hugo Larochelle, et al. The Neural Autoregressive Distribution Estimator, 2011, AISTATS.
[23] Aaron C. Courville, et al. Unsupervised Models of Images by Spike-and-Slab RBMs, 2011, ICML.
[24] John W. Paisley, et al. Variational Bayesian Inference with Stochastic Search, 2012, ICML.
[25] KyungHyun Cho, et al. Enhanced Gradient for Training Restricted Boltzmann Machines, 2013, Neural Computation.
[26] Brenden M. Lake, et al. One-shot learning by inverting a compositional causal process, 2013, NIPS.
[27] Yoshua Bengio, et al. Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation, 2013, arXiv.
[28] Jimmy Ba, et al. Adaptive dropout for training deep neural networks, 2013, NIPS.
[29] Andriy Mnih, et al. Neural Variational Inference and Learning in Belief Networks, 2014, ICML.
[30] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[31] Danilo Jimenez Rezende, et al. Stochastic Backpropagation and Approximate Inference in Deep Generative Models, 2014, ICML.
[32] Ian J. Goodfellow, et al. Generative Adversarial Nets, 2014, NIPS.
[33] Diederik P. Kingma, et al. Auto-Encoding Variational Bayes, 2013, ICLR.
[34] Diederik P. Kingma, et al. Semi-supervised Learning with Deep Generative Models, 2014, NIPS.
[35] Karol Gregor, et al. Deep AutoRegressive Networks, 2013, ICML.
[36] Junyoung Chung, et al. A Recurrent Latent Variable Model for Sequential Data, 2015, NIPS.
[37] Danilo Jimenez Rezende, et al. Variational Inference with Normalizing Flows, 2015, ICML.
[38] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[39] Tim Salimans, et al. Markov Chain Monte Carlo and Variational Inference: Bridging the Gap, 2014, ICML.
[40] Jörg Bornschein, et al. Reweighted Wake-Sleep, 2014, ICLR.
[41] Yuri Burda, et al. Accurate and conservative estimates of MRF log-likelihood using reverse annealing, 2014, AISTATS.
[42] Antti Rasmus, et al. Semi-supervised Learning with Ladder Networks, 2015, NIPS.
[43] Karol Gregor, et al. DRAW: A Recurrent Neural Network For Image Generation, 2015, ICML.
[44] Diederik P. Kingma, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[45] Tapani Raiko, et al. Techniques for Learning Binary Stochastic Feedforward Neural Networks, 2014, ICLR.
[46] Bo Zhang, et al. Learning Deep Generative Models with Doubly Stochastic MCMC, 2015, arXiv.
[47] Andriy Mnih, et al. Variational Inference for Monte Carlo Objectives, 2016, ICML.
[48] Yuri Burda, et al. Importance Weighted Autoencoders, 2015, ICLR.
[49] Yingzhen Li, et al. Variational Inference with Rényi Divergence, 2016, arXiv.
[50] Casper Kaae Sønderby, et al. Ladder Variational Autoencoders, 2016, NIPS.
[51] Alex Graves. Stochastic Backpropagation through Mixture Density Distributions, 2016, arXiv.
[52] Matthew J. Johnson, et al. Composing graphical models with neural networks for structured representations and fast inference, 2016, NIPS.
[53] Jörg Bornschein, et al. Bidirectional Helmholtz Machines, 2015, ICML.
[54] Samuel R. Bowman, et al. Generating Sentences from a Continuous Space, 2015, CoNLL.
[55] Dustin Tran, et al. The Variational Gaussian Process, 2015, ICLR.
[56] Tim Salimans. A Structured Variational Auto-encoder for Learning Deep Hierarchies of Sparse Features, 2016, arXiv.