Learning to activate logic rules for textual reasoning

Most current textual reasoning models cannot learn a human-like reasoning process and thus lack interpretability and logical accuracy. To help address this issue, we propose a novel reasoning model that learns to activate logic rules explicitly via deep reinforcement learning. It takes the form of a Memory Network but features a special memory that stores relational tuples, mimicking the "Image Schema" of human cognitive activities. We recast textual reasoning as a sequential decision-making process that modifies or retrieves from this memory, with logic rules serving as state-transition functions. Activating logic rules for reasoning involves two problems, variable binding and relation activating; our model is a first step toward solving them jointly. It achieves an average error rate of 0.7% on bAbI-20, a widely used synthetic reasoning benchmark, using fewer than 1k training samples and no supporting facts.
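The mechanism the abstract describes can be sketched as follows: a memory of relational tuples, where applying a logic rule is a state transition that modifies or retrieves from the memory. This is a minimal illustrative sketch only; all names are hypothetical (not from the paper), and rule selection, which the paper learns with a reinforcement-learning policy, is hard-coded here.

```python
# Hypothetical sketch of a relational-tuple memory (names are illustrative,
# not from the paper). A logic rule acts as a state-transition function that
# modifies or retrieves from the memory; the paper learns which rule to
# activate at each step via deep reinforcement learning.

class TupleMemory:
    """Stores (subject, relation, object) tuples."""

    def __init__(self):
        self.tuples = set()

    def add(self, subj, rel, obj):
        self.tuples.add((subj, rel, obj))

    def retrieve(self, subj=None, rel=None, obj=None):
        # None acts as a wildcard slot.
        return [t for t in self.tuples
                if (subj is None or t[0] == subj)
                and (rel is None or t[1] == rel)
                and (obj is None or t[2] == obj)]

def rule_transitive_in(mem):
    """One logic rule as a state transition: in(a, b) & in(b, c) -> in(a, c).
    Matching tuple slots is variable binding; choosing "in" is relation
    activating."""
    for (a, r1, b) in list(mem.tuples):
        if r1 != "in":
            continue
        for (_, _, c) in mem.retrieve(subj=b, rel="in"):
            mem.add(a, "in", c)

# A bAbI-style episode: facts go into memory, a rule derives the answer.
mem = TupleMemory()
mem.add("apple", "in", "box")
mem.add("box", "in", "kitchen")
rule_transitive_in(mem)
print(mem.retrieve(subj="apple", rel="in"))
```

In the model itself, the memory contents are embeddings rather than symbols, and a policy trained with reinforcement learning decides which rule to activate at each reasoning step.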
