Dropout: a simple way to prevent neural networks from overfitting
Nitish Srivastava | Geoffrey E. Hinton | Alex Krizhevsky | Ilya Sutskever | Ruslan Salakhutdinov
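For readers unfamiliar with the technique named in the title: dropout randomly zeroes hidden units during training so the network cannot rely on any single co-adapted feature. The sketch below is an illustration only, not the paper's code; it uses the common "inverted dropout" variant, which scales surviving activations by `1/keep_prob` at train time so that no rescaling is needed at test time (the function name and signature are hypothetical).

```python
import numpy as np

def dropout(x, keep_prob=0.5, train=True, rng=None):
    """Inverted dropout (illustrative sketch).

    At train time, each unit is kept with probability keep_prob and the
    survivors are scaled by 1/keep_prob, so the expected activation is
    unchanged. At test time the input passes through untouched.
    """
    if not train:
        return x
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) < keep_prob  # Bernoulli(keep_prob) mask
    return x * mask / keep_prob
```

Because the scaling happens during training, evaluation needs no special handling, which is one reason this variant is popular in practice.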
[1] Mario Bertero, et al. The Stability of Inverse Problems, 1980.
[2] Lawrence D. Jackel, et al. Backpropagation Applied to Handwritten Zip Code Recognition, 1989, Neural Computation.
[3] Geoffrey E. Hinton, et al. Simplifying Neural Networks by Soft Weight-Sharing, 1992, Neural Computation.
[4] Geoffrey E. Hinton, et al. Bayesian Learning for Neural Networks, 1995.
[5] R. Tibshirani. Regression Shrinkage and Selection via the Lasso, 1996.
[6] Patrice Y. Simard, et al. Best practices for convolutional neural networks applied to visual document analysis, 2003, Seventh International Conference on Document Analysis and Recognition, Proceedings.
[7] Adi Shraibman, et al. Rank, Trace-Norm and Max-Norm, 2005, COLT.
[8] Geoffrey E. Hinton, et al. Reducing the Dimensionality of Data with Neural Networks, 2006, Science.
[9] Amir Globerson, et al. Nightmare at test time: robust learning by feature deletion, 2006, ICML.
[10] Yee Whye Teh, et al. A Fast Learning Algorithm for Deep Belief Nets, 2006, Neural Computation.
[11] Ruslan Salakhutdinov, et al. Bayesian probabilistic matrix factorization using Markov chain Monte Carlo, 2008, ICML '08.
[12] Yoshua Bengio, et al. Extracting and composing robust features with denoising autoencoders, 2008, ICML '08.
[13] Ohad Shamir, et al. Learning to classify with missing and corrupted features, 2008, ICML '08.
[14] Volodymyr Mnih, et al. CUDAMat: a CUDA-based matrix class for Python, 2009.
[15] Yann LeCun, et al. What is the best multi-stage architecture for object recognition?, 2009, IEEE 12th International Conference on Computer Vision.
[16] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[17] Geoffrey E. Hinton, et al. Deep Boltzmann Machines, 2009, AISTATS.
[18] Simon King, et al. IEEE Workshop on Automatic Speech Recognition and Understanding, 2009.
[19] Adi Livnat, et al. Sex, mixability, and modularity, 2010, Proceedings of the National Academy of Sciences.
[20] Geoffrey E. Hinton, et al. Phone Recognition with the Mean-Covariance Restricted Boltzmann Machine, 2010, NIPS.
[21] Pascal Vincent, et al. Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion, 2010, J. Mach. Learn. Res.
[22] Andrew Y. Ng, et al. Reading Digits in Natural Images with Unsupervised Feature Learning, 2011.
[23] Brendan J. Frey, et al. Bayesian prediction of tissue-regulated splicing using RNA sequence and cellular context, 2011, Bioinformatics.
[24] Daniel Povey, et al. The Kaldi Speech Recognition Toolkit, 2011.
[25] Florent Perronnin, et al. High-dimensional signature compression for large-scale image classification, 2011, CVPR.
[26] Jasper Snoek, et al. Practical Bayesian Optimization of Machine Learning Algorithms, 2012, NIPS.
[27] Kilian Q. Weinberger, et al. Marginalized Denoising Autoencoders for Domain Adaptation, 2012, ICML.
[28] Yann LeCun, et al. Convolutional neural networks applied to house numbers digit classification, 2012, Proceedings of the 21st International Conference on Pattern Recognition (ICPR 2012).
[29] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[30] Geoffrey E. Hinton, et al. Acoustic Modeling Using Deep Belief Networks, 2012, IEEE Transactions on Audio, Speech, and Language Processing.
[31] Rob Fergus, et al. Stochastic Pooling for Regularization of Deep Convolutional Neural Networks, 2013, ICLR.
[32] Stephen Tyree, et al. Learning with Marginalized Corrupted Features, 2013, ICML.
[33] Nitish Srivastava, et al. Improving Neural Networks with Dropout, 2013.
[34] Yoshua Bengio, et al. Maxout Networks, 2013, ICML.
[35] Sida I. Wang, et al. Dropout Training as Adaptive Regularization, 2013, NIPS.
[36] Christopher D. Manning, et al. Fast dropout training, 2013, ICML.