暂无分享,去创建一个
[1] Yoshua Bengio,et al. Deep Generative Stochastic Networks Trainable by Backprop , 2013, ICML.
[2] Pascal Vincent,et al. Tempered Markov Chain Monte Carlo for training of Restricted Boltzmann Machines , 2010, AISTATS.
[3] Geoffrey E. Hinton,et al. Deep Boltzmann Machines , 2009, AISTATS.
[4] Ruslan Salakhutdinov,et al. Evaluating probabilities under high-dimensional latent variable models , 2008, NIPS.
[5] Bernhard Schölkopf,et al. A Kernel Two-Sample Test , 2012, J. Mach. Learn. Res..
[6] Yoshua Bengio,et al. A Generative Process for sampling Contractive Auto-Encoders , 2012, ICML 2012.
[7] Max Welling,et al. Herding Dynamic Weights for Partially Observed Random Field Models , 2009, UAI.
[8] Yoshua Bengio,et al. Estimating or Propagating Gradients Through Stochastic Neurons , 2013, ArXiv.
[9] Yoshua Bengio,et al. Better Mixing via Deep Representations , 2012, ICML.
[10] Pascal Vincent,et al. Quickly Generating Representative Samples from an RBM-Derived Process , 2011, Neural Computation.
[11] Razvan Pascanu,et al. Theano: new features and speed improvements , 2012, ArXiv.
[12] Ruslan Salakhutdinov,et al. On the quantitative analysis of deep belief networks , 2008, ICML '08.
[13] Pascal Vincent,et al. Generalized Denoising Auto-Encoders as Generative Models , 2013, NIPS.
[14] Yee Whye Teh,et al. A Fast Learning Algorithm for Deep Belief Nets , 2006, Neural Computation.
[15] Radford M. Neal. Annealed importance sampling , 1998, Stat. Comput..
[16] Tapani Raiko,et al. Enhanced Gradient for Training Restricted Boltzmann Machines , 2013, Neural Computation.