A Theoretically Grounded Application of Dropout in Recurrent Neural Networks
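This page lists only the paper's title and bibliography. For orientation, a minimal sketch of the idea the title refers to: Gal and Ghahramani derive dropout in RNNs as approximate variational inference, which implies sampling each Bernoulli dropout mask once per sequence and reusing it at every timestep, including on the recurrent connections, rather than resampling a fresh mask at each step. The NumPy snippet below illustrates that tied-mask scheme; the function name, shapes, and the plain tanh cell are illustrative assumptions, not the authors' code.

```python
# Sketch of timestep-tied ("variational") dropout in a vanilla RNN.
# All names, sizes, and the tanh cell are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def variational_rnn_forward(x_seq, W_x, W_h, b, p_drop=0.25):
    """Run a tanh RNN over x_seq with dropout masks tied across timesteps.

    x_seq: array of shape (T, input_dim); returns hidden states (T, hidden_dim).
    """
    hidden_dim = W_h.shape[0]
    keep = 1.0 - p_drop
    # Sample each mask ONCE per sequence -- the key difference from naive
    # per-timestep dropout, which disrupts the recurrence. Inverted dropout
    # scaling (divide by keep) so no rescaling is needed at test time.
    mask_x = rng.binomial(1, keep, size=x_seq.shape[1]) / keep
    mask_h = rng.binomial(1, keep, size=hidden_dim) / keep

    h = np.zeros(hidden_dim)
    hs = []
    for x_t in x_seq:
        # Reuse the same masks at every timestep t, on input and recurrence.
        h = np.tanh(W_x @ (x_t * mask_x) + W_h @ (h * mask_h) + b)
        hs.append(h)
    return np.stack(hs)

# Tiny usage example with random weights.
T, D, H = 5, 3, 4
x_seq = rng.normal(size=(T, D))
W_x = rng.normal(scale=0.1, size=(H, D))
W_h = rng.normal(scale=0.1, size=(H, H))
b = np.zeros(H)
print(variational_rnn_forward(x_seq, W_x, W_h, b).shape)  # (5, 4)
```

At test time the masks would either be dropped entirely (deterministic averaging) or resampled over repeated forward passes to obtain Monte Carlo uncertainty estimates, in the spirit of the MC-dropout work cited as [31] below.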
[1] David J. C. MacKay, et al. A Practical Bayesian Framework for Backpropagation Networks, 1992, Neural Computation.
[2] Beatrice Santorini, et al. Building a Large Annotated Corpus of English: The Penn Treebank, 1993, CL.
[3] Geoffrey E. Hinton, et al. Keeping the neural networks simple by minimizing the description length of the weights, 1993, COLT '93.
[4] Radford M. Neal. Bayesian Learning for Neural Networks, 1995.
[5] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[6] Christopher M. Bishop, et al. Ensemble learning in Bayesian neural networks, 1998.
[7] Bo Pang, et al. Seeing Stars: Exploiting Class Relationships for Sentiment Categorization with Respect to Rating Scales, 2005, ACL.
[8] Alex Graves, et al. Practical Variational Inference for Neural Networks, 2011, NIPS.
[9] Stefan J. Kiebel, et al. Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks, 2012, Biological Cybernetics.
[10] Nitish Srivastava, et al. Improving neural networks by preventing co-adaptation of feature detectors, 2012, ArXiv.
[11] Hermann Ney, et al. LSTM Neural Networks for Language Modeling, 2012, INTERSPEECH.
[12] Geoffrey E. Hinton, et al. Speech recognition with deep recurrent neural networks, 2013, 2013 IEEE International Conference on Acoustics, Speech and Signal Processing.
[13] Maneesh Sahani, et al. Regularization and nonlinearities for neural language models: when are they needed?, 2013, ArXiv.
[14] Phil Blunsom, et al. Recurrent Continuous Translation Models, 2013, EMNLP.
[15] Yoshua Bengio, et al. Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation, 2014, EMNLP.
[16] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[17] Daan Wierstra, et al. Stochastic Backpropagation and Approximate Inference in Deep Generative Models, 2014, ICML.
[18] Jen-Tzung Chien, et al. Bayesian recurrent neural network language model, 2014, 2014 IEEE Spoken Language Technology Workshop (SLT).
[19] Christian Osendorfer, et al. On Fast Dropout and its Applicability to Recurrent Networks, 2013, ICLR.
[20] Christopher Kermorvant, et al. Dropout Improves Recurrent Neural Networks for Handwriting Recognition, 2013, 2014 14th International Conference on Frontiers in Handwriting Recognition.
[21] Quoc V. Le, et al. Sequence to Sequence Learning with Neural Networks, 2014, NIPS.
[22] Wojciech Zaremba, et al. Recurrent Neural Network Regularization, 2014, ArXiv.
[23] Zoubin Ghahramani, et al. Bayesian Convolutional Neural Networks with Bernoulli Approximate Variational Inference, 2015, ArXiv.
[24] Inchul Song, et al. RnnDrop: A novel dropout for RNNs in ASR, 2015, 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU).
[25] Julien Cornebise, et al. Weight Uncertainty in Neural Networks, 2015, ICML.
[26] Vivek Rathod, et al. Bayesian dark knowledge, 2015, NIPS.
[27] Ryan P. Adams, et al. Probabilistic Backpropagation for Scalable Learning of Bayesian Neural Networks, 2015, ICML.
[28] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[29] Christian Kermorvant, et al. Where to apply dropout in recurrent neural networks for handwriting recognition?, 2015, 2015 13th International Conference on Document Analysis and Recognition (ICDAR).
[30] Diederik P. Kingma, et al. Variational Dropout and the Local Reparameterization Trick, 2015, NIPS.
[31] Zoubin Ghahramani, et al. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning, 2015, ICML.