Structured cognitive representations and complex inference in neural systems

Samuel J. Gershman, Joshua B. Tenenbaum ({sjgershm,jbt}@mit.edu)
Department of Brain and Cognitive Sciences, MIT
Cambridge, MA 02139 USA

Alexandre Pouget (Alexandre.Pouget@unige.ch)
Department of Neuroscience, University of Geneva
CH-1211 Geneva 4, Switzerland

Matthew Botvinick (matthewb@princeton.edu)
Department of Psychology, Princeton University
Princeton, NJ 08540 USA

Peter Dayan (dayan@gatsby.ucl.ac.uk)
Gatsby Computational Neuroscience Unit, University College London
London WC1N 3AR, United Kingdom

This symposium addresses the question: how do neural circuits acquire and compute with structured representations? This question is examined from a number of angles. Gershman introduces the basic issues and discusses attempts to articulate a neurally plausible theory of structured cognition. Pouget describes recent work on implementing complex probabilistic computations in neural circuits. Botvinick shows how neural circuits can be used to discover hierarchical task structure in the environment. Finally, Dayan discusses work on wedding richly structured models of semantics with representations of individual episodes. Each talk will be 20 minutes long, followed by a 20-minute panel discussion with the speakers, moderated by Tenenbaum.

Keywords: Bayesian models, rational analysis, perception, olfaction, memory

Summary

The dream of cognitive neuroscience has always been a seamless integration of cognitive representations with neural machinery, but, despite decades of work, fundamental gaps remain. Part of the problem is that many contemporary theories of cognition are formulated in terms of representations and computations that are quite different from those used in computational neuroscience. Bridging this gap requires more than simply a translation between theoretical concepts in the two fields; what is needed is a more radical updating of neuroscience’s theoretical vocabulary.

What should this vocabulary look like? Some important features of representations and computations used in contemporary cognitive theories are:

• Compositional, recursive, and relational representations (Fodor, 1975; Smolensky, 1990; Hummel & Holyoak, 2003; Stewart et al., 2011).

• Flexible use of different structural forms (e.g., taxonomic vs. causal knowledge; Kemp & Tenenbaum, 2009).

• Multiple levels of abstraction (Tenenbaum et al., 2011).

• Knowledge partitioning / clustering (Lewandowsky & Kirsner, 2000).

• Complex intuitive theories (e.g., naive physics, theory of mind; Carey, 2009).

• Algorithms that operate on these representations (e.g., dynamic programming, Monte Carlo methods; Griffiths et al.).

These representations and computations are “structured” in the sense that they incorporate rich domain knowledge and strong constraints (Tenenbaum et al., 2011).

Gershman: from knowledge to neurons

How can neurons express the structured knowledge representations central to intelligence? This problem has been attacked many times from various angles. I discuss the history of these attempts and situate our current understanding of the problem. I then outline a new approach based on the idea of compressing structured knowledge using neurons in a way that supports probabilistic inference. I illustrate this approach using examples from motion perception and value-based decision making.

Pouget: modeling the neural basis of complex intractable inference

It is becoming increasingly clear that neural computation can be formalized as a form of probabilistic inference. Several hypotheses have emerged regarding the neural basis of these inferences, including one based on a type of code known as probabilistic population codes, or PPCs (Ma et al., 2006). PPCs have been used to model simple forms of multisensory integration, attentional search, perceptual decision making, or causal inference, for which human subjects have been shown to be nearly optimal. However, most inferences performed by the brain are too complex to be solved optimally in a reasonable