Simulating and Predicting Dynamical Systems With Spatial Semantic Pointers

While neural networks are highly effective at learning task-relevant representations from data, they typically do not learn representations with the kind of symbolic structure that is hypothesized to support high-level cognitive processes, nor do they naturally model such structures within problem domains that are continuous in space and time. To fill these gaps, this work exploits a method for defining vector representations that bind discrete (symbol-like) entities to points in continuous topological spaces in order to simulate and predict the behavior of a range of dynamical systems. These vector representations are called spatial semantic pointers (SSPs), and we demonstrate that they can (1) be used to model dynamical systems involving multiple objects represented in a symbol-like manner and (2) be integrated with deep neural networks to predict the future of physical trajectories. These results help unify what have traditionally appeared to be disparate approaches in machine learning.
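
The core operation the abstract refers to, binding a symbol-like vector to a point in a continuous space, can be made concrete with a short NumPy sketch of fractional binding over holographic reduced representations. This is a minimal illustration under assumptions of our own (the dimensionality d = 1024, the object vector, and the particular coordinates and displacement are arbitrary choices), not the paper's implementation and not its deep-network integration:

```python
import numpy as np

def make_unitary(d, rng):
    """Random unitary vector: every Fourier coefficient has magnitude 1,
    so circular convolution with it is invertible and norm-preserving."""
    coeffs = np.fft.fft(rng.standard_normal(d))
    coeffs /= np.abs(coeffs)
    coeffs[0] = 1.0
    if d % 2 == 0:
        coeffs[d // 2] = 1.0  # keep the Nyquist term real so powers stay real
    return np.real(np.fft.ifft(coeffs))

def power(v, exponent):
    """Fractional binding: raise the Fourier coefficients of a unitary
    vector to a real-valued exponent, mapping a continuous coordinate
    onto a high-dimensional vector."""
    return np.real(np.fft.ifft(np.fft.fft(v) ** exponent))

def bind(a, b):
    """Circular convolution, the binding operation of holographic
    reduced representations."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def encode_point(x, y, X, Y):
    """SSP for the 2-D point (x, y): S(x, y) = X^x (*) Y^y."""
    return bind(power(X, x), power(Y, y))

d = 1024  # vector dimensionality (illustrative choice)
rng = np.random.default_rng(seed=0)
X, Y = make_unitary(d, rng), make_unitary(d, rng)

# Bind a symbol-like object vector to a continuous location.
obj = rng.standard_normal(d) / np.sqrt(d)
scene = bind(obj, encode_point(1.3, -0.4, X, Y))

# Simulate one step of motion: binding with the SSP of a displacement
# (dx, dy) shifts the encoded location, because exponents add under
# circular convolution of fractional powers.
dx, dy = 0.2, 0.1
scene = bind(scene, encode_point(dx, dy, X, Y))

# Decode the object's location by unbinding the object vector and
# comparing against a grid of candidate SSPs; the similarity peaks at
# the shifted position (1.5, -0.3).
def involution(v):
    # approximate inverse under circular convolution (exact for unitary vectors)
    return np.real(np.fft.ifft(np.conj(np.fft.fft(v))))

loc = bind(scene, involution(obj))
xs = np.linspace(0.0, 3.0, 61)
ys = np.linspace(-1.5, 1.5, 61)
best = max(((loc @ encode_point(x, y, X, Y), x, y) for x in xs for y in ys),
           key=lambda t: t[0])
print("decoded position:", best[1], best[2])  # expect 1.5 -0.3
```

Because binding multiplies Fourier coefficients, applying a displacement SSP adds exponents, which is what lets a single fixed algebraic operation simulate continuous motion of a symbolically identified object.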
