Daniel Kifer | C. Lee Giles | Alexander Ororbia | Ankur Mali
[1] Andrew Y. Ng, et al. Parsing Natural Scenes and Natural Language with Recursive Neural Networks, 2011, ICML.
[2] Phil Blunsom, et al. Learning to Transduce with Unbounded Memory, 2015, NIPS.
[3] C. Lee Giles, et al. The Neural Network Pushdown Automaton: Model, Stack and Learning Simulations, 2017, ArXiv.
[4] Hugo Larochelle, et al. Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, 2015, CVSC.
[5] Svetha Venkatesh, et al. Graph Memory Networks for Molecular Activity Prediction, 2018, 24th International Conference on Pattern Recognition (ICPR).
[6] John Stogin, et al. Provably Stable Interpretable Encodings of Context Free Grammars in RNNs with a Differentiable Stack, 2020, ArXiv.
[7] Jason Weston, et al. End-To-End Memory Networks, 2015, NIPS.
[8] Jeffrey D. Ullman, et al. Introduction to Automata Theory, Languages and Computation, 1979.
[9] Steve Renals, et al. Multiplicative LSTM for sequence modelling, 2016, ICLR.
[10] Yonatan Belinkov, et al. Memory-Augmented Recurrent Neural Networks Can Learn Generalized Dyck Languages, 2019, ArXiv.
[11] Anima Anandkumar, et al. Memory Augmented Recursive Neural Networks, 2019, ArXiv.
[12] Samuel R. Bowman. Can recursive neural tensor networks learn logical reasoning?, 2014, ICLR.
[13] C. Lee Giles, et al. The Neural Network Pushdown Automaton: Architecture, Dynamics and Training, 1997, Summer School on Neural Networks.
[14] Anima Anandkumar, et al. Combining Symbolic Expressions and Black-box Function Evaluations in Neural Programs, 2018, ICLR.
[15] C. Lee Giles, et al. Extraction, Insertion and Refinement of Symbolic Rules in Dynamically Driven Recurrent Neural Networks, 1993.
[16] Tomas Mikolov, et al. Inferring Algorithmic Patterns with Stack-Augmented Recurrent Nets, 2015, NIPS.
[17] Zenon W. Pylyshyn, et al. Connectionism and cognitive architecture: A critical analysis, 1988, Cognition.
[18] Ying Zhang, et al. On Multiplicative Integration with Recurrent Neural Networks, 2016, NIPS.
[19] Yonatan Belinkov, et al. LSTM Networks Can Perform Dynamic Counting, 2019, Proceedings of the Workshop on Deep Learning and Formal Languages: Building Bridges.
[20] Michael Hahn, et al. Theoretical Limitations of Self-Attention in Neural Sequence Models, 2019, TACL.
[21] Jeffrey D. Ullman, et al. Introduction to Automata Theory, Languages, and Computation, 2nd edition, 2001, SIGA.
[22] Natalia Gimelshein, et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library, 2019, NeurIPS.
[23] Marco Baroni, et al. Generalization without Systematicity: On the Compositional Skills of Sequence-to-Sequence Recurrent Networks, 2017, ICML.
[24] Colin Giles, et al. Learning Context-free Grammars: Capabilities and Limitations of a Recurrent Neural Network with an External Stack Memory, 1992.
[25] C. Lee Giles, et al. The Neural State Pushdown Automata, 2019, ArXiv.
[26] Alex Graves, et al. Neural Turing Machines, 2014, ArXiv.
[27] Daniel Kifer, et al. Recognizing Long Grammatical Sequences Using Recurrent Networks Augmented With An External Differentiable Stack, 2020, ICGI.
[28] Christopher Potts, et al. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank, 2013, EMNLP.
[29] R. Fergus, et al. Learning to Discover Efficient Mathematical Identities, 2014, NIPS.
[30] Dawn Xiaodong Song, et al. Making Neural Programming Architectures Generalize via Recursion, 2017, ICLR.
[31] Christopher D. Manning, et al. Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks, 2015, ACL.
[32] C. Lee Giles, et al. Higher Order Recurrent Networks and Grammatical Inference, 1989, NIPS.
[33] Sridha Sridharan, et al. Tree Memory Networks for Modelling Long-term Temporal Dependencies, 2017, Neurocomputing.
[34] Richard Socher, et al. Ask Me Anything: Dynamic Memory Networks for Natural Language Processing, 2015, ICML.
[35] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[36] Guillaume Lample, et al. Deep Learning for Symbolic Mathematics, 2019, ICLR.
[37] Pushmeet Kohli, et al. Learning Continuous Semantic Representations of Symbolic Expressions, 2016, ICML.
[38] Mathijs Mul, et al. Compositionality Decomposed: How do Neural Networks Generalise?, 2019, J. Artif. Intell. Res.
[39] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[40] Sergio Gomez Colmenarejo, et al. Hybrid computing using a neural network with dynamic external memory, 2016, Nature.
[41] Pushmeet Kohli, et al. Analysing Mathematical Reasoning Abilities of Neural Models, 2019, ICLR.
[42] Jason Weston, et al. Memory Networks, 2014, ICLR.
[43] Richard Evans, et al. Can Neural Networks Understand Logical Entailment?, 2018, ICLR.
[44] Christopher Potts, et al. Recursive Neural Networks Can Learn Logical Semantics, 2014, CVSC.
[45] C. Lee Giles, et al. Constructing deterministic finite-state automata in recurrent neural networks, 1996, JACM.
[46] Willem H. Zuidema, et al. Visualisation and 'diagnostic classifiers' reveal how recurrent and recursive neural networks process hierarchical structure, 2017, J. Artif. Intell. Res.
[47] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[48] Doina Precup, et al. Connecting Weighted Automata and Recurrent Neural Networks through Spectral Learning, 2018, AISTATS.
[49] Danqi Chen, et al. Reasoning With Neural Tensor Networks for Knowledge Base Completion, 2013, NIPS.