[1] Geoffrey E. Hinton, et al. A Learning Algorithm for Boltzmann Machines, 1985, Cogn. Sci.
[2] Samy Bengio, et al. Modeling High-Dimensional Discrete Data with Multi-Layer Neural Networks, 1999, NIPS.
[3] Lawrence Cayton, et al. Algorithms for manifold learning, 2005.
[4] B. Schölkopf, et al. Modeling Human Motion Using Binary Latent Variables, 2007.
[5] Yoshua Bengio, et al. Learning Deep Architectures for AI, 2007, Found. Trends Mach. Learn.
[6] Yoshua Bengio, et al. Extracting and composing robust features with denoising autoencoders, 2008, ICML '08.
[7] Botond Cseke, et al. Advances in Neural Information Processing Systems 20 (NIPS 2007), 2008.
[8] Yoshua Bengio, et al. Zero-data Learning of New Tasks, 2008, AAAI.
[9] Geoffrey E. Hinton, et al. Factored conditional restricted Boltzmann Machines for modeling motion style, 2009, ICML '09.
[10] Geoffrey E. Hinton, et al. Deep Boltzmann Machines, 2009, AISTATS.
[11] Hariharan Narayanan, et al. Sample Complexity of Testing the Manifold Hypothesis, 2010, NIPS.
[12] Pascal Vincent, et al. Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion, 2010, J. Mach. Learn. Res.
[13] Hugo Larochelle, et al. The Neural Autoregressive Distribution Estimator, 2011, AISTATS.
[14] Yoshua Bengio, et al. Modeling Temporal Dependencies in High-Dimensional Sequences: Application to Polyphonic Music Generation and Transcription, 2012, ICML.
[15] Kevin P. Murphy, et al. Machine Learning: A Probabilistic Perspective, 2012, Adaptive Computation and Machine Learning series.
[16] Yoshua Bengio, et al. Deep Learning of Representations for Unsupervised and Transfer Learning, 2011, ICML Unsupervised and Transfer Learning.
[17] Yoshua Bengio, et al. Unsupervised and Transfer Learning Challenge: a Deep Learning Approach, 2011, ICML Unsupervised and Transfer Learning.
[18] Razvan Pascanu, et al. Theano: new features and speed improvements, 2012, ArXiv.
[19] Yoshua Bengio, et al. Spike-and-Slab Sparse Coding for Unsupervised Feature Discovery, 2012, ArXiv.
[20] Hossein Mobahi, et al. Deep Learning via Semi-supervised Embedding, 2012, Neural Networks: Tricks of the Trade.
[21] Hugo Larochelle, et al. RNADE: The real-valued neural autoregressive density-estimator, 2013, NIPS.
[22] Joshua B. Tenenbaum, et al. One-shot learning by inverting a compositional causal process, 2013, NIPS.
[23] Yoshua Bengio, et al. Multi-Prediction Deep Boltzmann Machines, 2013, NIPS.
[24] Ian J. Goodfellow, et al. Pylearn2: a machine learning research library, 2013, ArXiv.
[25] Peter Glöckner, et al. Why Does Unsupervised Pre-training Help Deep Learning?, 2013.
[26] Yoshua Bengio, et al. Better Mixing via Deep Representations, 2012, ICML.
[27] Pascal Vincent, et al. Generalized Denoising Auto-Encoders as Generative Models, 2013, NIPS.
[28] Yoshua Bengio, et al. What regularized auto-encoders learn from the data-generating distribution, 2012, J. Mach. Learn. Res.
[29] Li Yao, et al. Bounding the Test Log-Likelihood of Generative Models, 2014, ICLR.
[30] Yoshua Bengio, et al. Deep Generative Stochastic Networks Trainable by Backprop, 2013, ICML.
[31] Hugo Larochelle, et al. A Deep and Tractable Density Estimator, 2013, ICML.
[32] Yoshua Bengio, et al. Knowledge Matters: Importance of Prior Information for Optimization, 2013, J. Mach. Learn. Res.