On the Emergence of Analogical Inference

Paul H. Thibodeau (pthibod1@stanford.edu), Stephen J. Flusberg (sflus@stanford.edu), Jeremy J. Glick (jjglick@stanford.edu), Daniel A. Sternberg (sternberg@stanford.edu)
Department of Psychology, 450 Serra Mall, Bldg. 420, Stanford, CA 94305 USA

Abstract

What processes and mechanisms underlie analogical reasoning? In recent years, several computational models of analogy have been implemented to explore this question. One feature of many of these models is the assumption that humans possess dedicated analogy-specific cognitive machinery, such as a mapping or binding engine. In this paper, we question whether it is necessary to assume the existence of such machinery. We find that, at least for some types of analogy, it is not. Instead, some forms of analogical processing emerge naturally and spontaneously from relatively simple, low-level learning mechanisms. We argue that this perspective is consistent with empirical findings from the developmental literature and with recent advances in cognitive neuroscience.

Keywords: analogy; metaphor; relational reasoning; development; connectionism; computational model.

Introduction

In the past three decades, there has been a growing appreciation for the possibility that analogy lies at the core of human cognition (Gentner, 1983; Hofstadter, 2001; Holyoak, Gentner, & Kokinov, 2001; Penn, Holyoak, & Povinelli, 2008). On this view, it is our ability to understand, produce, and reason with analogies that allows us to create the wonderfully rich and sophisticated intellectual and cultural worlds we inhabit. In an attempt to illuminate the cognitive mechanisms that underlie analogical processing, several detailed computational models have been developed that capture key components of the analogical reasoning process (see French, 2002, for a review).
Among the most influential of these models are the Structure Mapping Engine (SME: Falkenhainer, Forbus, & Gentner, 1989) and Learning and Inference with Schemas and Analogies (LISA: Hummel & Holyoak, 1997). These models differ in many respects; however, they share a fundamental commitment to explicitly structured symbolic or hybrid representations (e.g. of objects and relations), together with the existence of a dedicated analogical mapping or binding mechanism that operates over these representations. Indeed, proponents of these approaches argue that analogical inference is beyond the reach of models that lack these properties, including fully distributed connectionist models (e.g. Gentner & Markman, 1993; Holyoak & Hummel, 2000). While the structured approach has successfully captured adult behavior in numerous analogical reasoning tasks (e.g. Markman & Gentner, 1997; Hummel & Holyoak, 1997), it is unclear how this analogy-specific machinery comes to exist in the brain over the course of development. Even developmentally-oriented models such as DORA (Doumas, Hummel, & Sandhofer, 2008), which attempts to learn the structured representations used by LISA, assume a great deal of analogy-specific cognitive machinery without specifying how this machinery comes to exist in the first place. Here, we address this issue by proposing that some forms of analogical processing may emerge gradually over the course of development through the operation of low-level, domain-general learning mechanisms (Flusberg, Thibodeau, Sternberg, & Glick, 2010; Leech, Mareschal, & Cooper, 2008). In support of this view, we describe a set of simulations carried out using the Rumelhart network (Rumelhart, 1990), a neurally inspired model that has succeeded in capturing many results from the literature on semantic development in children (e.g. Rogers & McClelland, 2004) and whose variants have been used to understand the deterioration of conceptual knowledge in semantic dementia (e.g.
Dilkina, McClelland, & Plaut, 2008).

Simulations

Our learning task is inspired by Hinton's (1986) family tree model, one of the first attempts to address relational learning in a connectionist network. The task of the model is to learn "statements" that are true of the various members of a family, including identity information, perceptual features, and relations between family members. Input to the model consists of activating a Subject unit, corresponding to a particular family member, and a Relation unit. The Relation units correspond to the different kinds of relationships that can hold between subjects and objects (e.g. "is_named", "parent_of"). The network is wired in a strictly feed-forward fashion, as shown in Figure 1, such that the input propagates forward through the internal layers, producing a set of predictions over the Object layer. Over the course of training, the network's weights change (via backpropagation of the cross-entropy error on the output units) so as to better predict which Object outputs correspond to each combination of Subject and Relation inputs. Because the model also contains intervening layers of units between the input and output layers, it is forced to re-represent the inputs as a distributed pattern of activation over these internal layers.
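The architecture just described can be sketched as a minimal, self-contained network in plain Python. Everything here is a toy stand-in for illustration only: the two-person "family" corpus, the layer sizes, the learning rate, and the number of training epochs are hypothetical choices, not the configuration used in the actual simulations.

```python
import math, random

random.seed(0)

# Toy corpus of (subject, relation, object) "statements", in the spirit of
# the Hinton (1986) family tree task described above.
SUBJECTS = ["alice", "bob"]
RELATIONS = ["is_named", "sibling_of"]
OBJECTS = ["alice", "bob"]
FACTS = [("alice", "is_named", "alice"),
         ("bob", "is_named", "bob"),
         ("alice", "sibling_of", "bob"),
         ("bob", "sibling_of", "alice")]

def one_hot(item, vocab):
    return [1.0 if v == item else 0.0 for v in vocab]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def init(rows, cols):
    return [[random.uniform(-0.5, 0.5) for _ in range(cols)] for _ in range(rows)]

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

# Feed-forward wiring: Subject -> Representation layer; the Representation
# pattern plus the Relation input feed a Hidden layer, which predicts Objects.
N_REP, N_HID = 4, 8
W_rep = init(N_REP, len(SUBJECTS))
W_hid = init(N_HID, N_REP + len(RELATIONS))
W_out = init(len(OBJECTS), N_HID)

def forward(subj, rel):
    s = one_hot(subj, SUBJECTS)
    rep = [sigmoid(a) for a in matvec(W_rep, s)]      # learned re-representation
    h_in = rep + one_hot(rel, RELATIONS)
    hid = [sigmoid(a) for a in matvec(W_hid, h_in)]
    out = [sigmoid(a) for a in matvec(W_out, hid)]    # predictions over Objects
    return s, rep, h_in, hid, out

def train_step(subj, rel, obj, lr=0.5):
    s, rep, h_in, hid, out = forward(subj, rel)
    target = one_hot(obj, OBJECTS)
    # With sigmoid outputs and cross-entropy error, the output delta is (out - target).
    d_out = [o - t for o, t in zip(out, target)]
    d_hid = [sum(d_out[k] * W_out[k][j] for k in range(len(OBJECTS))) * hid[j] * (1 - hid[j])
             for j in range(N_HID)]
    d_rep = [sum(d_hid[j] * W_hid[j][i] for j in range(N_HID)) * rep[i] * (1 - rep[i])
             for i in range(N_REP)]
    # Gradient-descent weight updates (backpropagation).
    for k in range(len(OBJECTS)):
        for j in range(N_HID):
            W_out[k][j] -= lr * d_out[k] * hid[j]
    for j in range(N_HID):
        for i in range(len(h_in)):
            W_hid[j][i] -= lr * d_hid[j] * h_in[i]
    for i in range(N_REP):
        for j in range(len(SUBJECTS)):
            W_rep[i][j] -= lr * d_rep[i] * s[j]
    # Return the cross-entropy error for monitoring.
    return -sum(t * math.log(o) + (1 - t) * math.log(1 - o)
                for t, o in zip(target, out))

for epoch in range(2000):
    total = sum(train_step(*fact) for fact in FACTS)

for subj, rel, obj in FACTS:
    out = forward(subj, rel)[-1]
    print(subj, rel, "->", OBJECTS[out.index(max(out))])
```

After training, activating a Subject and a Relation should drive the correct Object unit most strongly; the intermediate `rep` layer plays the role of the distributed internal re-representation the text refers to.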

References

Bowdle, B. F., & Gentner, D. (1997). Informativity and asymmetry in comparisons. Cognitive Psychology.

Dilkina, K., McClelland, J. L., & Plaut, D. C. (2008). A single-system account of semantic and lexical deficits in five semantic dementia patients. Cognitive Neuropsychology.

Doumas, L. A. A., Hummel, J. E., & Sandhofer, C. M. (2008). A theory of the discovery and predication of relational concepts. Psychological Review.

Falkenhainer, B., Forbus, K. D., & Gentner, D. (1989). The Structure-Mapping Engine: Algorithm and examples. Artificial Intelligence.

Flusberg, S. J., Thibodeau, P. H., Sternberg, D. A., & Glick, J. J. (2010). A connectionist approach to embodied conceptual metaphor. Frontiers in Psychology.

French, R. M. (2002). The computational modeling of analogy-making. Trends in Cognitive Sciences.

Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science.

Gentner, D. (2005). Relational language and the development of relational mapping. Cognitive Psychology.

Gentner, D., et al. (2009). Relational language helps children reason analogically.

Gentner, D. (2010). Bootstrapping the mind: Analogical processes and symbol systems. Cognitive Science.

Gentner, D., & Markman, A. B. (1993). Analogy - Watershed or Waterloo? Structural alignment and the development of connectionist models of analogy. In Advances in Neural Information Processing Systems.

Goswami, U. (1993). Analogical reasoning in children.

Hinton, G. E. (1986). Learning distributed representations of concepts. Proceedings of the Eighth Annual Conference of the Cognitive Science Society.

Holyoak, K. J., Gentner, D., & Kokinov, B. N. (2001). The place of analogy in cognition. In The Analogical Mind: Perspectives from Cognitive Science.

Holyoak, K. J., & Hummel, J. E. (2000). The proper treatment of symbols in a connectionist architecture.

Hummel, J. E. (2010). Symbolic versus associative learning. Cognitive Science.

Hummel, J. E., & Holyoak, K. J. (1997). Distributed representations of structure: A theory of analogical access and mapping. Psychological Review.

Krawczyk, D. C., et al. (2004). A neurocomputational model of analogical reasoning and its breakdown in frontotemporal lobar degeneration. Journal of Cognitive Neuroscience.

Leech, R., Mareschal, D., & Cooper, R. P. (2008). Analogy as relational priming. Behavioral and Brain Sciences.

Markman, A. B., & Gentner, D. (1997). The effects of alignability on memory. Psychological Science.

Penn, D. C., Holyoak, K. J., & Povinelli, D. J. (2008). Darwin's mistake: Explaining the discontinuity between human and nonhuman minds. Behavioral and Brain Sciences.

Ramscar, M., et al. (2010). The effects of feature-label-order and their implications for symbolic learning. Cognitive Science.

Rogers, T. T., & McClelland, J. L. (2004). Semantic Cognition: A Parallel Distributed Processing Approach. MIT Press.

Rumelhart, D. E. (1990). Brain style computation: Learning and generalization.

Williams, J. J., et al. (2010). The role of explanation in discovery and generalization: Evidence from category learning. ICLS.