
mReactr: A computational theory of deductive reasoning

Sangeet Khemlani and J. Gregory Trafton
khemlani@aic.nrl.navy.mil, trafton@itd.nrl.navy.mil
Navy Center for Applied Research in Artificial Intelligence
Naval Research Laboratory, Washington, DC 20375 USA

Abstract

The mReactr system is a computational implementation of the mental model theory of reasoning (Johnson-Laird, 1983) that is embedded within the ACT-R cognitive architecture (Anderson, 1990). We show how the memory-handling mechanisms of the architecture can be leveraged to store and handle discrete representations of possibilities, i.e., mental models, efficiently. Namely, the iconic representation of a mental model can be distributed, such that each component of a model is represented by a “chunk” in ACT-R’s declarative memory. Those chunks can be merged to create minimal mental models, i.e., reduced representations that do not contain redundant information. Minimal models can then be modified and inspected rapidly. We describe three separate versions of the mReactr software that minimize models at different stages of the system’s inferential processes. Only one of the versions provides an acceptable model of data from an immediate inference task. The resulting system suggests that reasoners minimize mental models only when they initiate deliberative mental processes such as a search for alternative models.

Keywords: reasoning, mental models, immediate inferences, mReactr, ACT-R

Introduction

People regularly make complex deductive inferences. For instance, if you know that none of the lawyers in the room are men, you might refrain from asking any of the men in the room for legal advice. If so, you have made an “immediate” inference from a single premise:

1. None of the lawyers are men.
   Therefore, none of the men are lawyers.

The inference is valid because its conclusion must be true given that its premise is true (Jeffrey, 1981, p. 1). You likely followed up the deductive inference above with an inductive inference:

2. None of the men are lawyers.
   Therefore, they do not possess legal knowledge.

The second inference is inductive: the conclusion is not necessary given the truth of the premise. How do reasoners make deductive and inductive inferences like the ones above? One prominent answer is that they construct mental simulations of the things they already know or believe. They then manipulate those simulations to obtain information they did not have at the outset. The idea that reasoning depends on building simulations, or mental models, is the fundamental intuition behind the mental model theory of reasoning (Johnson-Laird, 1983). In the present paper, we outline the theory and address one of its major limitations, namely its inability to explain how models are stored and manipulated in memory. We describe a computational implementation of the theory that is embedded within the ACT-R cognitive architecture (Anderson, Bothell, Byrne, Douglass, Lebiere, & Qin, 2004), and we show how the memory-handling mechanisms of the architecture can be leveraged to store and handle mental models efficiently.

Reasoning and mental models

The “model” theory of reasoning proposes that when individuals comprehend discourse, they construct mental models of the possibilities consistent with the meaning of the discourse (Johnson-Laird, 2006). The theory depends on three main principles:

1) Individuals use a representation of the meaning of a premise and their knowledge to construct mental models of the various possibilities to which the premises refer.
2) The structure of a model corresponds to the structure of what it represents (see Peirce, 1931-1958, Vol. 4), and so mental models are iconic insofar as possible.
3) The more models a reasoner has to keep in mind, the harder an inference is.

On a model-based account, a conclusion is necessary if it holds in all the models of the premises and possible if it holds in at least one model of the premises. mReasoner (Khemlani, Lotstein, & Johnson-Laird, under review) is a unified computational implementation of the mental model theory of reasoning. It implements two interoperating systems for reasoning:

a) An intuitive system (system 1) for building an initial mental model and drawing rapid inferences from that model.
b) A deliberative system (system 2) for more powerful recursive processes that search for alternative models. This system can manipulate and update the initial model created in system 1, and it can modify conclusions.

The system is akin to dual-process models of reasoning (see, e.g., Evans, 2003, 2007, 2008; Johnson-Laird, 1983, Ch. 6; Kahneman, 2011; Sloman, 1996; Stanovich, 1999; Verschueren, Schaeken, & d’Ydewalle, 2005). Below, we describe the various processes that each system implements.
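The model-based account above can be made concrete with a small sketch. The following Python code is not the authors' implementation (mReactr is built within ACT-R, and all names here are hypothetical); it is a minimal illustration, under stated assumptions, of the core ideas: a mental model as a collection of chunks (one per individual), merging of duplicate chunks into a minimal model, a rapid system-1 check of a conclusion against the initial model, and a system-2 test of necessity across alternative models.

```python
# Hypothetical sketch (not the authors' ACT-R code): a mental model is a
# set of chunks, where each chunk is a frozenset of the properties that
# one individual in the model possesses.

def build_model(individuals):
    """System 1 (sketch): construct a model from a list of individuals.

    Identical chunks merge automatically via set semantics, yielding a
    'minimal' model that contains no redundant information."""
    return {frozenset(ind) for ind in individuals}

def holds(model, conclusion):
    """Check a 'None of the A are B' conclusion against one model:
    it holds iff no chunk combines both properties."""
    a, b = conclusion
    return not any(a in chunk and b in chunk for chunk in model)

def necessary(models, conclusion):
    """System 2 (sketch): a conclusion is necessary if it holds in all
    models of the premises, and possible if it holds in at least one."""
    return all(holds(m, conclusion) for m in models)

# Premise: "None of the lawyers are men." The initial model represents
# lawyers and men as disjoint individuals; duplicate chunks merge.
initial = build_model([
    ["lawyer"], ["lawyer"],   # these two merge into a single chunk
    ["man"], ["man"],
])

# System 1 draws the rapid immediate inference
# "None of the men are lawyers" from the initial model:
print(holds(initial, ("man", "lawyer")))   # True

# System 2 searches for alternative models of the premise, e.g. one that
# also contains an individual who is neither a lawyer nor a man. The
# conclusion holds in every such model, so the inference is valid:
alternatives = [initial, build_model([["lawyer"], ["man"], []])]
print(necessary(alternatives, ("man", "lawyer")))   # True
```

The set-of-frozensets representation mirrors, loosely, the paper's idea that a model can be distributed over chunks in declarative memory and then minimized by merging redundant chunks; a genuine implementation would instead rely on ACT-R's chunk-merging and retrieval mechanisms.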