Nested LSTMs
[1] Sepp Hochreiter, et al. Untersuchungen zu dynamischen neuronalen Netzen, 1991.
[2] Beatrice Santorini, et al. Building a Large Annotated Corpus of English: The Penn Treebank, 1993, CL.
[3] Yoshua Bengio, et al. Learning long-term dependencies with gradient descent is difficult, 1994, IEEE Trans. Neural Networks.
[4] Yoshua Bengio, et al. Hierarchical Recurrent Neural Networks for Long-Term Dependencies, 1995, NIPS.
[5] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[6] Doina Precup, et al. Between MDPs and Semi-MDPs: A Framework for Temporal Abstraction in Reinforcement Learning, 1999, Artif. Intell.
[7] Jürgen Schmidhuber, et al. Learning to Forget: Continual Prediction with LSTM, 2000, Neural Computation.
[8] Doina Precup, et al. Learning Options in Reinforcement Learning, 2002, SARA.
[9] Ronald J. Williams, et al. Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning, 1992, Machine Learning.
[10] Yoshua Bengio, et al. Understanding the difficulty of training deep feedforward neural networks, 2010, AISTATS.
[11] Yoshua Bengio, et al. Estimating or Propagating Gradients Through Stochastic Neurons, 2013, ArXiv.
[12] Alex Graves, et al. Generating Sequences With Recurrent Neural Networks, 2013, ArXiv.
[13] Benjamin Schrauwen, et al. Training and Analysing Deep Recurrent Neural Networks, 2013, NIPS.
[14] Yoshua Bengio, et al. Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation, 2014, EMNLP.
[15] Mirella Lapata, et al. Chinese Poetry Generation with Recurrent Neural Networks, 2014, EMNLP.
[16] Erich Elsen, et al. Deep Speech: Scaling up end-to-end speech recognition, 2014, ArXiv.
[17] Razvan Pascanu, et al. How to Construct Deep Recurrent Neural Networks, 2013, ICLR.
[18] Jürgen Schmidhuber, et al. A Clockwork RNN, 2014, ICML.
[19] Surya Ganguli, et al. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks, 2013, ICLR.
[20] Yoshua Bengio, et al. Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling, 2014, ArXiv.
[21] Lihong Li, et al. PAC-inspired Option Discovery in Lifelong Reinforcement Learning, 2014, ICML.
[22] Fei-Fei Li, et al. Visualizing and Understanding Recurrent Networks, 2015, ArXiv.
[23] Wojciech Zaremba, et al. An Empirical Exploration of Recurrent Network Architectures, 2015, ICML.
[24] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[25] Yoshua Bengio, et al. Gated Feedback Recurrent Neural Networks, 2015, ICML.
[26] Tomas Mikolov, et al. Inferring Algorithmic Patterns with Stack-Augmented Recurrent Nets, 2015, NIPS.
[27] Phil Blunsom, et al. Learning to Transduce with Unbounded Memory, 2015, NIPS.
[28] Wojciech Zaremba, et al. Reinforcement Learning Neural Turing Machines, 2015, ArXiv.
[29] Mirella Lapata, et al. Long Short-Term Memory-Networks for Machine Reading, 2016, EMNLP.
[30] Alex Graves, et al. Grid Long Short-Term Memory, 2015, ICLR.
[31] John Salvatier, et al. Theano: A Python framework for fast computation of mathematical expressions, 2016, ArXiv.
[32] Kamil M. Rocki, et al. Recurrent Memory Array Structures, 2016, ArXiv.
[33] Murray Shanahan, et al. Classifying Options for Deep Reinforcement Learning, 2016, ArXiv.
[34] Chong Wang, et al. Deep Speech 2: End-to-End Speech Recognition in English and Mandarin, 2015, ICML.
[35] Geoffrey E. Hinton, et al. Using Fast Weights to Attend to the Recent Past, 2016, NIPS.
[36] Alex Graves, et al. Associative Long Short-Term Memory, 2016, ICML.
[37] Yoshua Bengio, et al. Architectural Complexity Measures of Recurrent Neural Networks, 2016, NIPS.
[38] Roland Memisevic, et al. Regularizing RNNs by Stabilizing Activations, 2015, ICLR.
[39] Doina Precup, et al. The Option-Critic Architecture, 2016, AAAI.
[40] Lantao Yu, et al. SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient, 2016, AAAI.
[41] Yoshua Bengio, et al. Maximum-Likelihood Augmented Discrete Generative Adversarial Networks, 2017, ArXiv.
[42] Yoshua Bengio, et al. Hierarchical Multiscale Recurrent Neural Networks, 2016, ICLR.
[43] Aaron C. Courville, et al. Recurrent Batch Normalization, 2016, ICLR.
[44] Yoshua Bengio, et al. Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations, 2016, ICLR.
[45] Jürgen Schmidhuber, et al. LSTM: A Search Space Odyssey, 2015, IEEE Transactions on Neural Networks and Learning Systems.