Neural Consciousness Flow

The ability to reason beyond data fitting is essential if deep learning systems are to take a leap forward towards artificial general intelligence. Much effort has been made to model neural reasoning as an iterative decision-making process built on recurrent networks and reinforcement learning. Instead, inspired by the consciousness prior proposed by Yoshua Bengio, we explore reasoning through the notion of attentive awareness from a cognitive perspective, and formulate it as attentive message passing on graphs, called neural consciousness flow (NeuCFlow). Aiming to bridge the gap between deep learning systems and reasoning, we propose an attentive computation framework with a three-layer architecture, consisting of an unconsciousness flow layer, a consciousness flow layer, and an attention flow layer. We implement the NeuCFlow model with graph neural networks (GNNs) and conditional transition matrices. Our attentive computation greatly reduces the complexity of vanilla GNN-based methods, making it feasible to run on large-scale graphs. We validate our model on knowledge graph reasoning by solving a series of knowledge base completion (KBC) tasks. The experimental results show that NeuCFlow significantly outperforms previous state-of-the-art KBC methods, including both embedding-based and path-based approaches. The reproducible code can be found at the link below.
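To make the three-layer idea concrete, below is a minimal sketch (not the authors' implementation) of attentive message passing in PyTorch: an unconsciousness flow that passes messages over the full graph, an attention flow that propagates an attention distribution through a conditional transition matrix, and a consciousness flow that updates node states with the attended part of the graph emphasized. The class names, shapes, and update rules are illustrative assumptions, not the paper's API.

```python
# Hypothetical sketch of NeuCFlow-style attentive message passing.
# All module names and update rules are assumptions for illustration only.
import torch
import torch.nn as nn


class UnconsciousnessFlow(nn.Module):
    """Full-graph message passing: every node aggregates from all its neighbors."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)

    def forward(self, h, adj):
        # h: (N, dim) node states, adj: (N, N) {0,1} adjacency matrix
        n = h.size(0)
        pair = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                          h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        msgs = torch.relu(self.msg(pair)) * adj.unsqueeze(-1)   # zero out non-edges
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        return h + msgs.sum(dim=1) / deg                         # mean-aggregate


class AttentionFlow(nn.Module):
    """Propagates attention over edges via a conditional transition matrix."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)

    def forward(self, h, adj, att):
        # att: (N,) current attention distribution over nodes
        n = h.size(0)
        pair = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                          h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        logits = self.score(pair).squeeze(-1)
        logits = logits.masked_fill(adj == 0, float('-inf'))
        trans = torch.softmax(logits, dim=-1)    # row-stochastic transition matrix
        trans = torch.nan_to_num(trans)          # isolated nodes get all-zero rows
        return att @ trans                       # move attention one step


class ConsciousnessFlow(nn.Module):
    """Message passing softly restricted to the attended subgraph."""
    def __init__(self, dim):
        super().__init__()
        self.update = nn.GRUCell(dim, dim)

    def forward(self, h, adj, att):
        weighted = att.unsqueeze(-1) * h          # emphasize attended nodes
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        agg = adj @ weighted / deg                # aggregate attended neighbors
        return self.update(agg, h)                # GRU-style state update
```

In the actual model, the attention distribution would determine which small subgraph participates in the consciousness-flow computation at each step, which is what reduces the cost relative to running full-graph GNN message passing everywhere; the soft weighting above is only a simplification of that hard selection.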
