Marginalized Denoising Auto-encoders for Nonlinear Representations
Yoshua Bengio | Kilian Q. Weinberger | Minmin Chen | Fei Sha
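The paper extends the marginalized denoising autoencoder of reference [19], which analytically averages over infinitely many feature corruptions instead of sampling them. As a hedged illustration (not the paper's nonlinear method itself), the following is a minimal sketch of the linear closed-form layer from [19], assuming feature dropout with probability `p`: the optimal reconstruction map is W = E[x x̃ᵀ] E[x̃ x̃ᵀ]⁻¹, where both expectations over the corruption have closed forms in the data scatter matrix. The function name `msda_layer` and the ridge term `eps` are illustrative choices.

```python
import numpy as np

def msda_layer(X, p=0.5, eps=1e-5):
    """One closed-form marginalized denoising autoencoder layer (sketch).

    X   : (d, n) data matrix, one example per column
    p   : probability that each feature is independently zeroed out
    eps : small ridge term for numerical stability (illustrative)

    Returns the linear reconstruction map W and the nonlinear
    hidden representation tanh(W X).
    """
    d, _ = X.shape
    q = 1.0 - p                       # probability a feature survives corruption
    S = X @ X.T                       # (scaled) scatter matrix, proportional to E[x x^T]
    # E[x_tilde x_tilde^T]: off-diagonal entries scale by q^2, diagonal by q
    Q = S * (q * q)
    np.fill_diagonal(Q, q * np.diag(S))
    P = S * q                         # E[x x_tilde^T]
    # W = P Q^{-1}, computed by solving W Q = P (Q is symmetric)
    W = np.linalg.solve(Q + eps * np.eye(d), P.T).T
    return W, np.tanh(W @ X)
```

With uncorrelated features the learned map is close to the identity, since each feature is best reconstructed from its own (rescaled) corrupted value; correlations between features are what give W informative off-diagonal structure.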
[1] Yoshua Bengio, et al. An empirical evaluation of deep architectures on problems with many factors of variation, 2007, ICML '07.
[2] Yoshua Bengio, et al. Scaling learning algorithms towards AI, 2007.
[3] Pascal Vincent, et al. Contractive Auto-Encoders: Explicit Invariance During Feature Extraction, 2011, ICML.
[4] Yoshua Bengio, et al. Domain Adaptation for Large-Scale Sentiment Classification: A Deep Learning Approach, 2011, ICML.
[5] Pascal Vincent, et al. Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion, 2010, J. Mach. Learn. Res.
[6] Lin Xiao, et al. Hierarchical Classification via Orthogonal Transfer, 2011, ICML.
[7] Kristen Grauman, et al. Learning a Tree of Metrics with Disjoint Visual Features, 2011, NIPS.
[8] Patrick P. Bergmans, et al. A simple converse for broadcast channels with additive white Gaussian noise (Corresp.), 1974, IEEE Trans. Inf. Theory.
[9] Yoshua Bengio, et al. Extracting and composing robust features with denoising autoencoders, 2008, ICML '08.
[10] Honglak Lee, et al. Unsupervised feature learning for audio classification using convolutional deep belief networks, 2009, NIPS.
[11] R. Fergus, et al. Learning invariant features through topographic filter maps, 2009, IEEE Conference on Computer Vision and Pattern Recognition.
[12] Geoffrey E. Hinton, et al. Learning representations by back-propagating errors, 1986, Nature.
[13] Thore Graepel, et al. Invariant Pattern Recognition by Semi-Definite Programming Machines, 2003, NIPS.
[14] Paul Lamere, et al. Steerable Playlist Generation by Learning Song Similarity from Radio Station Playlists, 2009, ISMIR.
[15] Yixin Chen, et al. Automatic Feature Decomposition for Single View Co-training, 2011, ICML.
[16] Jürgen Schmidhuber, et al. Multi-column deep neural networks for image classification, 2012, IEEE Conference on Computer Vision and Pattern Recognition.
[17] Nitish Srivastava, et al. Improving neural networks by preventing co-adaptation of feature detectors, 2012, arXiv.
[18] Marc'Aurelio Ranzato, et al. Sparse Feature Learning for Deep Belief Networks, 2007, NIPS.
[19] Kilian Q. Weinberger, et al. Marginalized Denoising Autoencoders for Domain Adaptation, 2012, ICML.
[20] Nitish Srivastava, et al. Improving Neural Networks with Dropout, 2013.
[21] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[22] Honglak Lee, et al. Sparse deep belief net model for visual area V2, 2007, NIPS.
[23] Stephen Tyree, et al. Marginalizing Corrupted Features, 2014, arXiv.
[24] Chih-Jen Lin, et al. LIBLINEAR: A Library for Large Linear Classification, 2008, J. Mach. Learn. Res.
[25] Klaus-Robert Müller, et al. Efficient BackProp, 2012, Neural Networks: Tricks of the Trade.
[26] Kurt Hornik, et al. Neural networks and principal component analysis: Learning from examples without local minima, 1989, Neural Networks.
[27] Sida I. Wang, et al. Dropout Training as Adaptive Regularization, 2013, NIPS.
[28] Yoshua Bengio, et al. Unsupervised and Transfer Learning Challenge: a Deep Learning Approach, 2011, ICML Unsupervised and Transfer Learning.
[29] Christopher M. Bishop, et al. Training with Noise is Equivalent to Tikhonov Regularization, 1995, Neural Computation.
[30] Pascal Vincent, et al. Adding noise to the input of a model trained with a regularized objective, 2011, arXiv.
[31] Yoshua Bengio, et al. What regularized auto-encoders learn from the data-generating distribution, 2012, J. Mach. Learn. Res.
[32] Bernhard Schölkopf, et al. Improving the Accuracy and Speed of Support Vector Machines, 1996, NIPS.
[33] Christopher D. Manning, et al. Fast dropout training, 2013, ICML.