Synthetic Reasoning and the Reverse Engineering of Boolean Circuits

N. Y. Louis Lee (ngarlee@princeton.edu)
Department of Psychology, Princeton University, Princeton, NJ 08544-1010 USA

P. N. Johnson-Laird (phil@princeton.edu)
Department of Psychology, Princeton University, Princeton, NJ 08544-1010 USA

Abstract

In synthetic reasoning, individuals assemble elementary components into effective systems, such as the working mechanism of an unknown device. This paper proposes a new theory of this ability, and reports two experiments investigating how individuals reverse engineer Boolean circuits with two inputs and an output. Experiment 1 supported the theory's prediction that the complexity, and hence the difficulty, of synthetic reasoning problems depends on the number of possibilities in which the assembled system works, the number of components in that system, and the relations between the component parts. Experiment 2 generalized this finding and showed that individuals develop two distinct strategies.

Introduction

Synthetic reasoning is a sequence of mental steps that individuals follow in assembling elementary components into an effective system. When you explain an everyday event, you synthesize your existing causal knowledge with new information in order to explain the event. When you figure out how a device works, you infer the overall mechanism from the functions of each of the device's components. Synthetic reasoning calls for both deduction and induction, especially the form of induction that generates explanations, i.e., "abduction". It occurs both in daily life and in science. But how do people do it?

Cognitive scientists have investigated a variety of aspects of synthetic reasoning in both psychology and artificial intelligence (e.g., Johnson & Krems, 2001). Klahr and colleagues have studied how individuals discover the function of a control on a toy robot (see, e.g., Klahr & Dunbar, 1988; Klahr, 2000).
The participants write programs that control the robot, in order to discover the function of the control. The main finding was that individuals differ in whether they focus on hypotheses about the control or on possible experiments.

AI researchers have proposed accounts of 'abductive' reasoning in which individuals generate explanations (for a review, see Paul, 1993). These accounts, however, presuppose a pre-existing set of putative explanations, i.e., they finesse the problem of how individuals use knowledge to synthesize explanations. For example, the 'set-cover' approach selects subsets of existing hypotheses (e.g., Allemang, Tanner, Bylander, & Josephson, 1987). Similarly, the 'explanatory-coherence' account relies on a handcrafted connectionist model that represents competing hypotheses (e.g., Thagard, 2000). Hence, despite a sizable literature on explanatory reasoning and abduction, the underlying mental processes of synthetic reasoning remain largely unknown. We therefore formulated a theory of synthetic reasoning, and carried out two experiments to test it.

The next section describes our theory and illustrates our test-bed of Boolean systems. A Boolean system, such as an electrical circuit of switches, has a "logic" equivalent to negation, conjunction, and disjunction. This logic also applies to concepts (e.g., Shepard, Hovland, & Jenkins, 1961), to sentential connectives (e.g., Johnson-Laird, Byrne, & Schaeken, 1992), and to learning algorithms in artificial intelligence (e.g., Kearns & Vazirani, 1994). No one knows for certain what makes Boolean problems difficult. Our theory, however, makes clear predictions about their difficulty.

A Theory of Synthetic Reasoning

In order to construct a working model of a system, you need to understand what the system does and how its components work.
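As an illustration of this "logic" of switch circuits (our sketch, not part of the paper's materials): switches wired in series realize conjunction, switches wired in parallel realize disjunction, and a changeover contact can realize negation. The function names below are our own.

```python
from itertools import product

# Illustrative sketch (not the authors' materials): the Boolean "logic"
# of two-switch circuits. True = switch up, False = switch down.

def series(s1, s2):
    # Switches in series: current flows only if both are up (conjunction).
    return s1 and s2

def parallel(s1, s2):
    # Switches in parallel: current flows if either is up (disjunction).
    return s1 or s2

def toggle(s):
    # A changeover contact inverts a switch's effect (negation).
    return not s

# Tabulate both circuits over all four switch positions.
for s1, s2 in product([True, False], repeat=2):
    print(s1, s2, "series:", series(s1, s2), "parallel:", parallel(s1, s2))
```

The same three operations suffice to express any Boolean function of the switches, which is what makes such circuits a convenient test-bed.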
Our theory postulates that individuals construct mental models of systems, i.e., representations in which the structure of the model corresponds to the structure of the system (Gentner & Stevens, 1983; Johnson-Laird, 2001). But how do individuals construct such a model? Like any sort of thinking (with the possible exception of mental arithmetic), the process of synthetic reasoning has to be treated as nondeterministic (Hopcroft & Ullman, 1979). As in deductive reasoning (van der Henst, Yang, & Johnson-Laird, 2002) and problem solving (Lee & Johnson-Laird, 2004), reasoners should develop different strategies as they learn to synthesize systems of the same sort. They are likely to develop two main sorts of strategy: they may focus one at a time on the possibilities in which the system either does or does not produce an output, or they may focus on each of the input components one at a time and try to account for its effects on the output.

To grasp the difference between the two strategies, consider the following problem, in which individuals have to assemble an electrical circuit containing two binary switches, a battery, a light bulb, and some wires. In this circuit, the light comes on when one or both of the switches are up. Thus, the circuit has four different possibilities:

switch 1 up    switch 2 up      light on
switch 1 up    switch 2 down    light on
switch 1 down  switch 2 up      light on
switch 1 down  switch 2 down    light off
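These possibilities can be enumerated mechanically. The short sketch below (ours, not the authors') also counts the possibilities in which the circuit works, the quantity that the theory ties to the difficulty of a synthetic reasoning problem:

```python
from itertools import product

UP, DOWN = True, False

def light(switch1, switch2):
    # The example circuit: the light comes on when one or both switches are up.
    return switch1 or switch2

# Enumerate every combination of switch positions.
possibilities = [(s1, s2, light(s1, s2))
                 for s1, s2 in product([UP, DOWN], repeat=2)]

for s1, s2, on in possibilities:
    print("switch 1", "up  " if s1 else "down",
          " switch 2", "up  " if s2 else "down",
          " light", "on" if on else "off")

working = sum(1 for _, _, on in possibilities if on)
print(len(possibilities), "possibilities,", working, "in which the light is on")
# → 4 possibilities, 3 in which the light is on
```

With two binary switches there are always four possibilities; circuits differ only in how many of them produce an output, which is one of the three factors the theory predicts should determine difficulty.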

References

[1] G. Paul. Approaches to abductive reasoning: an overview. Artificial Intelligence Review, 1993.
[2] N. Y. L. Lee & P. N. Johnson-Laird. Creative strategies in problem solving. 2004.
[3] D. Klahr. Exploring Science: The Cognition and Development of Discovery Processes. 2000.
[4] P. Thagard. Coherence in Thought and Action. 2000.
[5] A. Newell. Unified Theories of Cognition. 1990.
[6] G. S. Halford, W. H. Wilson, & S. Phillips. Processing capacity defined by relational complexity: implications for comparative, developmental, and cognitive psychology. Behavioral and Brain Sciences, 1998.
[7] D. Allemang, M. C. Tanner, T. Bylander, & J. R. Josephson. Computational Complexity of Hypothesis Assembly. IJCAI, 1987.
[8] D. Klahr & K. Dunbar. Dual Space Search During Scientific Reasoning. Cognitive Science, 1988.
[9] D. E. Rumelhart, J. L. McClelland, & the PDP Research Group. Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations. 1986.
[10] J. Feldman. Minimization of Boolean complexity in human concept learning. Nature, 2000.
[11] J. E. Hopcroft & J. D. Ullman. Formal Languages and Their Relation to Automata. Addison-Wesley, 1969.
[12] A. Newell & H. A. Simon. Human Problem Solving. 1972.
[13] J.-B. van der Henst, Y. Yang, & P. N. Johnson-Laird. Strategies in sentential reasoning. Cognitive Science, 2002.
[14] P. N. Johnson-Laird & F. Savary. Illusory inferences: a novel class of erroneous deductions. Cognition, 1999.
[15] R. N. Shepard, C. I. Hovland, & H. M. Jenkins. Learning and memorization of classifications. Psychological Monographs, 1961.
[16] P. N. Johnson-Laird. Mental models and deduction. Trends in Cognitive Sciences, 2001.
[17] P. N. Johnson-Laird, R. M. J. Byrne, & W. Schaeken. Propositional reasoning by model. Psychological Review, 1992.
[18] T. R. Johnson & J. F. Krems. Use of current explanations in multicausal abductive reasoning. Cognitive Science, 2001.
[19] G. Knoblich, S. Ohlsson, H. Haider, & D. Rhenius. Constraint relaxation and chunk decomposition in insight problem solving. 1999.
[20] V. Vapnik. Statistical Learning Theory. 1998.
[21] T. C. Ormerod, et al. Dynamics and constraints in insight problem solving. Journal of Experimental Psychology: Learning, Memory, and Cognition, 2002.
[22] M. J. Kearns & U. V. Vazirani. An Introduction to Computational Learning Theory. MIT Press, 1994.