Computational complexity analysis for cognitive scientists

Iris van Rooij [contact person], Johan Kwisthout, Mark Blokpoel
Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour
Montessorilaan 3, 6525 HR Nijmegen, The Netherlands, i.vanrooij@donders.ru.nl

Todd Wareham
Department of Computer Science, Memorial University of Newfoundland, St. John's, NL, Canada

Keywords: computational complexity theory, computational modeling, intractability, NP-hard, scalability, algorithms, fixed-parameter tractability, approximation.

Aims and Motivation

Many computational- or rational-level models of cognition postulate computations that appear to be computationally intractable (e.g., NP-hard or worse). Formally, this means that (unless P = NP) the postulated computations require an amount of time that grows exponentially, or worse, with the size of the input. Informally, this means that the postulated computations do not scale in any obvious way to explain how the modeled cognitive capacities can operate in the real world outside the lab. This problem of intractability is quite common in cognitive science. It is observed in practically all domains of cognition, including, for instance, perception, language, reasoning, categorization, decision-making, and motor planning. It is also not specific to any particular class of models, as it can arise for symbolic, connectionist, probabilistic (e.g., Bayesian), dynamical, logic-based, and even heuristic models of cognition.

How can cognitive scientists effectively deal with the intractability of their models? Over the last decades, theoretical computer science has developed several sophisticated and well-established concepts and techniques for computational complexity analysis that can be directly utilized by cognitive scientists. Using these techniques, cognitive scientists can not only assess whether or not a particular model is intractable, but also identify the parameters of the model that are responsible for that intractability. As a result, these techniques can be used to generate hypotheses about how models can be revised so as to make them computationally tractable, thereby improving their computational plausibility and scalability. With this tutorial we aim to make these techniques for computational complexity analysis available to interested cognitive scientists.

Morning session

In the morning session, participants will learn about the conceptual and mathematical foundations of computational complexity analysis in the context of cognitive modeling. The session will include a conceptual primer on several complexity-theoretic concepts (e.g., NP-hardness, fixed-parameter tractability) and techniques (e.g., polynomial-time and parameterized reduction). All these notions and techniques are also explained in: van Rooij, I. (2008). The tractable cognition thesis. Cognitive Science, 32, 939-984. Participants are kindly requested to read this paper prior to attending the tutorial. During the morning session, participants will have the opportunity to practice the described techniques via hands-on exercises (these can be done using paper and pencil). The lecturers will use an interactive style of instruction to help participants work through the exercises.
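To give a concrete flavor of one of these notions ahead of the session, consider the following toy sketch (ours, not part of the tutorial materials). A problem is fixed-parameter tractable if it can be solved in time f(k) * n^c for some function f and constant c, so that the exponential blow-up is confined to a parameter k rather than the input size n. The textbook example is Vertex Cover: although NP-hard, it can be solved by a bounded search tree in roughly 2^k * |E| steps, where k is the size of the cover sought.

def vertex_cover(edges, k):
    """Return a vertex cover of size <= k for the given edge list,
    or None if no such cover exists.

    Bounded search tree: any cover must contain an endpoint of each
    edge, so pick an uncovered edge (u, v) and branch on including
    u or v. Recursion depth is at most k, giving at most 2^k branches,
    each costing O(|E|) work: exponential in k only, not in |E|.
    """
    if not edges:
        return set()   # all edges covered
    if k == 0:
        return None    # edges remain but no budget left
    u, v = edges[0]    # this edge must be covered by u or by v
    for pick in (u, v):
        rest = [e for e in edges if pick not in e]  # drop edges pick covers
        cover = vertex_cover(rest, k - 1)
        if cover is not None:
            return cover | {pick}
    return None

# Example: a 4-cycle has a vertex cover of size 2.
print(vertex_cover([(1, 2), (2, 3), (3, 4), (4, 1)], 2))  # e.g. {1, 3}

The point of the sketch is that the exponential cost is driven entirely by the parameter k; this is the sense in which identifying the right parameter can restore tractability for small parameter values even when a model is NP-hard in general.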
Afternoon session

In the afternoon session, we will illustrate the broad applicability of the methodology. Wareham will guide participants through a detailed analysis of a model of analogy derivation based on Structure-Mapping Theory (van Rooij, Evans, Muller, Gedge, & Wareham, 2008). Blokpoel will do the same for a Bayesian model of action understanding (Blokpoel, Kwisthout, van der Weide, & van Rooij, 2010). Through interactive exercises, participants can see why both models are NP-hard and which parameters cause this intractability. We will then consider the important topic of approximation as an approach to dealing with intractability, with a focus on approximating Bayesian inferences. Kwisthout will present various notions of approximation and illustrate novel results on how constraining particular parameters of probability distributions may make approximate Bayesian strategies (such as sampling or local search) successful; a minimal sketch of one such sampling strategy is given at the end of this proposal. Our intent here is to show that the methodology extends beyond exact computation to the analysis of approximation as well.

Audience

The target audience of this tutorial consists of post-graduate students and researchers in any subfield of cognitive science who wish to: (a) achieve an introductory-level understanding of the basic concepts underlying computational complexity analysis, (b) gain hands-on experience with some of the basic proof techniques in computational complexity analysis, and (c) learn about the philosophical foundations of, and debate surrounding, the use of computational complexity theory for analyzing computational-level theories of cognition. The tutorial will assume a basic knowledge of cognitive psychology and an affinity with computational considerations.
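As a pointer for the afternoon session's discussion of approximation, the following minimal sketch (our own toy example, not the action-understanding model of Blokpoel et al.) shows one sampling strategy of the kind mentioned above: approximating a Bayesian posterior over a hidden goal by rejection sampling from a hypothetical generative model. All names and probability values below are illustrative assumptions.

import random
from collections import Counter

# Hypothetical toy generative model: a prior over goals and a
# likelihood of each observable action given a goal.
PRIOR = {"get coffee": 0.5, "get tea": 0.3, "leave": 0.2}
LIKELIHOOD = {
    "get coffee": {"walk to kitchen": 0.8, "walk to door": 0.2},
    "get tea":    {"walk to kitchen": 0.7, "walk to door": 0.3},
    "leave":      {"walk to kitchen": 0.1, "walk to door": 0.9},
}

def sample_from(dist):
    """Draw one outcome from a {outcome: probability} dictionary."""
    r, acc = random.random(), 0.0
    for outcome, p in dist.items():
        acc += p
        if r < acc:
            return outcome
    return outcome  # guard against floating-point slack

def rejection_sample_posterior(observed_action, n_samples=10_000):
    """Approximate P(goal | action) by sampling (goal, action) pairs
    from the generative model and keeping only those that match the
    observation; accuracy improves with the number of accepted samples."""
    accepted = Counter()
    for _ in range(n_samples):
        goal = sample_from(PRIOR)
        action = sample_from(LIKELIHOOD[goal])
        if action == observed_action:
            accepted[goal] += 1
    total = sum(accepted.values())
    return {g: c / total for g, c in accepted.items()}

print(rejection_sample_posterior("walk to kitchen"))
# Exact posterior for comparison: get coffee ~0.63, get tea ~0.33, leave ~0.03

The sketch makes the trade-off studied in the session tangible: the estimate converges on the exact posterior only as more samples are accepted, and how quickly that happens depends on properties of the probability distributions involved.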