Rational Process Models

Edward Vul (evul@mit.edu), Joshua B. Tenenbaum (jbt@mit.edu)
Brain and Cognitive Sciences, 43 Vassar St., Cambridge, MA 02139 USA

Thomas L. Griffiths (tom_griffiths@berkeley.edu)
Dept. of Psychology, Tolman Hall, Berkeley, CA 94720 USA

Roger Levy (rlevy@ucsd.edu)
Dept. of Linguistics, 9500 Gilman Drive, La Jolla, CA 92093 USA

Mark Steyvers (msteyver@uci.edu)
Dept. of Cognitive Science, 3151 Social Science Pl., Irvine, CA 92697 USA

Craig R. M. McKenzie (cmckenzie@ucsd.edu)
Rady School of Management & Dept. of Psychology, 9500 Gilman Drive, La Jolla, CA 92093-0553 USA

Keywords: Bayesian modeling; Computational modeling; Process modeling; Algorithms

Summary

Rational, Bayesian accounts of cognition at the computational level have enjoyed much success in recent years: human behavior is consistent with that of optimal Bayesian agents in low-level perceptual and motor tasks as well as in high-level cognitive tasks such as category and concept learning, language, and theory of mind. However, two challenges have thus far been ignored by these computational-level models.

First, the "process" challenge: Bayesian models often assume unbounded cognitive resources are available for computation, yet cognitive psychology has emphasized the severe limitations imposed on human cognition. How do models at the computational level relate to traditional models from cognitive psychology concerned with psychological mechanisms such as memory and attention?

The second challenge is the "scaling" problem: research in machine learning and statistics has shown that exact computation is intractable for inference problems on the scale relevant to human cognition, indicating that people must be solving these problems approximately. How can Bayesian models of cognition scale up to problems of the size the mind faces in the real world, beyond the small scales of the typical laboratory tasks in which these models are usually tested?

This symposium brings together researchers from machine learning, cognitive science, linguistics, and psychology who are working at the interface between the computational and algorithmic levels of description. The overarching theme is a new approach to answering both the "process" and "scaling" challenges by rational reverse-engineering of algorithmic-level models.

Rational, or reverse-engineering, analyses are by now familiar for computational-level questions, where they ask: "What is the ideal inference (or at least, what are rational inferences with good statistical properties) given the available information and the task?" The answer to this computational-level question can often be described as some form of Bayesian inference, and models derived from these considerations have enjoyed success in explaining some aspects of human behavior.

This symposium proposes an approach that asks the same question at the algorithmic level: "What is the ideal way to implement this inferential computation given constraints on space, time, energy, the scale of the problem, and so on?" The answer to such problems in Bayesian statistics and machine learning is usually some form of Monte Carlo.

Monte Carlo sampling is a method for approximating probability distributions by simulating a stochastic process whose long-run properties reflect the probability distribution being simulated. Sampling is a general strategy for approximating otherwise intractable statistical inferences with limited resources: it may be applied to any inference problem and is more robust to the size of the problem than other numerical methods.

Based on such reverse-engineering considerations, the panelists suggest that in a variety of domains (categorization: Griffiths; learning temporal structure: Steyvers; parsing language: Levy; and multiple object tracking: Vul) people adopt sampling algorithms to approximate optimal inference. One specific suggestion that cuts across the fields and topics of the speakers is that instead of representing a full posterior distribution, people keep track of a few sampled hypotheses.

In the sequential tasks considered here, a sample-based representation of the posterior may be updated online with a particle-filtering (sequential Monte Carlo) strategy. Across the different domains and models considered in this symposium, this domain-general algorithm provides a cognitively plausible mechanism for approximating Bayes-optimal computations online. What is most exciting is that these models make contact with (and even extend) the rich empirical paradigms of traditional cognitive psychology and can account for interesting new aspects of human behavior.

The panelists in this symposium suggest that instead of producing ad hoc cognitive process models one at a time, one for each task, the development of process models can be guided by reverse-engineering considerations. Through rational analysis of algorithms for approximate Bayesian inference, we can link up Bayesian models with traditional process accounts in cognitive psychology and suggest how Bayesian models can scale up to problems of real-world size.
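To make the Monte Carlo idea concrete, here is a minimal sketch (not drawn from the symposium itself; the coin-flip model and function name are illustrative assumptions) of approximating a posterior expectation by weighting samples from the prior by their likelihood, a simple form of self-normalized importance sampling:

```python
import random

def posterior_mean_by_sampling(data, n_samples=100_000, seed=0):
    """Approximate E[theta | data] for a coin's bias theta under a
    uniform prior, by drawing theta from the prior and weighting each
    sample by the likelihood of the observed flips."""
    rng = random.Random(seed)
    heads = sum(data)
    tails = len(data) - heads
    total_weight = 0.0
    weighted_sum = 0.0
    for _ in range(n_samples):
        theta = rng.random()                         # sample from the uniform prior
        weight = theta**heads * (1 - theta)**tails   # likelihood of the data
        total_weight += weight
        weighted_sum += weight * theta
    return weighted_sum / total_weight

# With 7 heads in 10 flips and a uniform prior, the exact posterior
# mean is (7 + 1) / (10 + 2) = 2/3; the estimate converges toward it.
estimate = posterior_mean_by_sampling([1, 1, 1, 1, 1, 1, 1, 0, 0, 0])
```

The same loop works unchanged for any prior one can sample from and any likelihood one can evaluate, which is the sense in which sampling is a general-purpose approximation strategy.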
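The particle-filtering strategy can likewise be sketched in a few lines (a toy 1-D tracking model chosen for illustration; the function name and parameters are assumptions, not from the symposium). Each particle is one sampled hypothesis about the latent state; the set is propagated through the dynamics, reweighted by the likelihood of each new observation, and resampled:

```python
import math
import random

def particle_filter(observations, n_particles=100, motion_sd=1.0, obs_sd=1.0, seed=0):
    """Bootstrap particle filter for a 1-D random walk observed with
    Gaussian noise. Returns the posterior-mean estimate of the latent
    position after each observation."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]  # initial hypotheses
    means = []
    for y in observations:
        # 1. Propagate each hypothesis through the dynamics model.
        particles = [x + rng.gauss(0.0, motion_sd) for x in particles]
        # 2. Weight each hypothesis by the likelihood of the observation.
        weights = [math.exp(-0.5 * ((y - x) / obs_sd) ** 2) for x in particles]
        total = sum(weights)
        probs = [w / total for w in weights]
        # 3. Resample: hypotheses that explain the data well are duplicated;
        #    poor ones are dropped.
        particles = rng.choices(particles, weights=probs, k=n_particles)
        means.append(sum(particles) / n_particles)  # posterior-mean estimate
    return means
```

Shrinking n_particles toward a handful corresponds to the suggestion above that people track only a few sampled hypotheses rather than a full posterior, at the cost of noisier, more error-prone inference.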