Yoshua Bengio | Hugo Larochelle | Tegan Maharaj | Aaron C. Courville | Anirudh Goyal | Chris Pal | Nicolas Ballas | Nan Rosemary Ke | David Krueger | János Kramár | Mohammad Pezeshki
[1] Sepp Hochreiter, et al. Untersuchungen zu dynamischen neuronalen Netzen, 1991.
[2] Beatrice Santorini, et al. Building a Large Annotated Corpus of English: The Penn Treebank, 1993, CL.
[3] Yoshua Bengio, et al. Learning long-term dependencies with gradient descent is difficult, 1994, IEEE Trans. Neural Networks.
[4] Yoshua Bengio, et al. Hierarchical Recurrent Neural Networks for Long-Term Dependencies, 1995, NIPS.
[5] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[6] Jürgen Schmidhuber, et al. Learning to Forget: Continual Prediction with LSTM, 2000, Neural Computation.
[7] Risto Miikkulainen, et al. Test Data, 2019, Encyclopedia of Machine Learning and Data Mining.
[8] Nitish Srivastava, et al. Improving neural networks by preventing co-adaptation of feature detectors, 2012, ArXiv.
[9] Razvan Pascanu, et al. Understanding the exploding gradient problem, 2012, ArXiv.
[10] Yoshua Bengio, et al. Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation, 2013, ArXiv.
[11] Christopher D. Manning, et al. Fast dropout training, 2013, ICML.
[12] Philip Bachman, et al. Learning with Pseudo-Ensembles, 2014, NIPS.
[13] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[14] Razvan Pascanu, et al. How to Construct Deep Recurrent Neural Networks, 2013, ICLR.
[15] Jürgen Schmidhuber, et al. A Clockwork RNN, 2014, ICML.
[16] Christian Osendorfer, et al. On Fast Dropout and its Applicability to Recurrent Networks, 2013, ICLR.
[17] Christopher Kermorvant, et al. Dropout Improves Recurrent Neural Networks for Handwriting Recognition, 2013, 2014 14th International Conference on Frontiers in Handwriting Recognition.
[18] Wojciech Zaremba, et al. Recurrent Neural Network Regularization, 2014, ArXiv.
[19] Inchul Song, et al. RNNDROP: A novel dropout for RNNs in ASR, 2015, 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU).
[20] Yoshua Bengio, et al. Blocks and Fuel: Frameworks for deep learning, 2015, ArXiv.
[21] Christopher Joseph Pal, et al. Describing Videos by Exploiting Temporal Structure, 2015, 2015 IEEE International Conference on Computer Vision (ICCV).
[22] Marc'Aurelio Ranzato, et al. Learning Longer Memory in Recurrent Neural Networks, 2014, ICLR.
[23] Yajie Miao, et al. EESEN: End-to-end speech recognition using deep RNN models and WFST-based decoding, 2015, 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU).
[24] Yoshua Bengio, et al. BinaryConnect: Training Deep Neural Networks with binary weights during propagations, 2015, NIPS.
[25] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[26] Geoffrey E. Hinton, et al. A Simple Way to Initialize Recurrent Networks of Rectified Linear Units, 2015, ArXiv.
[27] Yoshua Bengio, et al. Neural Machine Translation by Jointly Learning to Align and Translate, 2014, ICLR.
[28] Zoubin Ghahramani, et al. A Theoretically Grounded Application of Dropout in Recurrent Neural Networks, 2015, NIPS.
[29] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[30] Kilian Q. Weinberger, et al. Deep Networks with Stochastic Depth, 2016, ECCV.
[31] Tomasz Kornuta, et al. Surprisal-Driven Zoneout, 2016, ArXiv.
[32] John Salvatier, et al. Theano: A Python framework for fast computation of mathematical expressions, 2016, ArXiv.
[33] Alexander M. Rush, et al. Character-Aware Neural Language Models, 2015, AAAI.
[34] Xinyun Chen, et al. Delving into Transferable Adversarial Examples and Black-box Attacks, 2016.
[35] Geoffrey E. Hinton, et al. Layer Normalization, 2016, ArXiv.
[36] Erhardt Barth, et al. Recurrent Dropout without Memory Loss, 2016, COLING.
[37] Roland Memisevic, et al. Regularizing RNNs by Stabilizing Activations, 2015, ICLR.
[38] David A. Forsyth, et al. Swapout: Learning an ensemble of deep architectures, 2016, NIPS.
[39] Yoshua Bengio, et al. Hierarchical Multiscale Recurrent Neural Networks, 2016, ICLR.
[40] Aaron C. Courville, et al. Recurrent Batch Normalization, 2016, ICLR.