A Bayesian Model of Rule Induction in Raven's Progressive Matrices

Daniel R. Little (daniel.little@unimelb.edu.au)
School of Psychological Sciences, The University of Melbourne, Parkville VIC 3010, Australia

Stephan Lewandowsky (stephan.lewandowsky@uwa.edu.au)
School of Psychology, The University of Western Australia, Crawley WA 6009, Australia

Thomas L. Griffiths (tom_griffiths@berkeley.edu)
Department of Psychology, University of California, Berkeley, Berkeley CA 94720-1650, USA

Abstract

Raven's Progressive Matrices (Raven, Raven, & Court, 1998) is one of the most prevalent assays of fluid intelligence; however, most theoretical accounts of Raven's focus on models that can generate the correct answer but do not fit human performance data. We provide a computational-level theory which interprets rule induction in Raven's as Bayesian inference. The model computes the posterior probability of each rule in the set of possible rule hypotheses from the prior probability of each rule and the probability that the rule could have generated the features of the objects in the matrix. Based on fits to both correct and incorrect response options across both the Standard and Advanced Progressive Matrices, we propose several novel mechanisms that may drive responding to Raven's items.

Keywords: Rule induction; Bayesian inference; Raven's Progressive Matrices

Introduction

Raven's Progressive Matrices (Raven et al., 1998; Raven's from here on) is one of the most widely used assays of fluid intelligence, and much attention has focused on the underlying elemental cognitive processes. Raven's has arguably gathered more attention in the cognitive literature than any other psychometric measure of fluid intelligence, largely because it is an induction task par excellence that can be modeled computationally (see, e.g., Carpenter, Just, & Shell, 1990; Verguts, De Boeck, & Maris, 2000). For example, Carpenter et al. (1990) presented a production-system model of Raven's to support a two-factor theory of the test, with working memory capacity (WMC) as the first factor and a second factor related to the ability to abstract relations. This latter ability has been associated with several attributes, including rule generation speed (Verguts & De Boeck, 2002), inference speed (Rasmussen & Eliasmith, 2011), and analogical comparison (Lovett, Forbus, & Usher, 2010; McGreggor, Kunda, & Goel, 2010).

These extant models of Raven's have focused on cognitive processes and mechanisms that underlie the inference of rules from the objects in the matrix. Further insight can be gained by exploring a computational-level analysis (Marr, 1982). Because performance in Raven's relies primarily on rule induction, the task is conducive to instantiation within a Bayesian framework. For instance, Bayesian models of rule induction have been successfully applied to similar tasks, such as numerical sequence prediction (e.g., which number follows in the sequence 1, 2, 3, 5, 7, 11?; Austerweil & Griffiths, 2011) and rule-based categorization (Goodman, Tenenbaum, Feldman, & Griffiths, 2008). Examining Raven's within the context of a Bayesian model allows exploration of questions about what people's priors (or, in non-Bayesian terms, inductive biases) might be like for rules of the variety used in the Raven's test.
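Stated compactly, casting rule induction as Bayesian inference amounts to computing a posterior distribution over a space of candidate rules. The notation below is ours, added for concreteness rather than drawn from the original text:

\[
P(r \mid O) = \frac{P(O \mid r)\,P(r)}{\sum_{r' \in \mathcal{R}} P(O \mid r')\,P(r')}, \qquad r \in \mathcal{R},
\]

where $\mathcal{R}$ is the hypothesis space of rules, $O$ denotes the observed objects in the matrix, $P(O \mid r)$ is the probability that rule $r$ would have generated the observed features, and the prior $P(r)$ encodes the inductive bias toward rule $r$.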
Finally, the Bayesian formalism is extensible: standard extensions to Bayesian models can capture other, more process-based interpretations of factors known to be relevant to performance on Raven's, such as memory and learning.

Here we present a Bayesian model of Raven's which interprets rule induction as Bayesian inference: a set of rules, each with some prior probability, is evaluated according to its ability to have plausibly generated the features of the items shown in the matrix. Rules are then sampled based on their posterior probability, and Bayesian model averaging is used to predict which answers are most likely given the posterior distribution. Unlike extant models, which examine how successful the model is at predicting correct responses (e.g., Carpenter et al., 1990; Lovett et al., 2010; McGreggor et al., 2010), our model also makes predictions about the proportion of responses involving the various incorrect options.

Bayesian Model of Raven's

Solving a Raven's problem can be conceptualized as a three-stage process involving feature extraction, rule inference, and prediction.¹ As illustrated in Figure 1, Raven's items have the following composition:

\[
\begin{array}{ccc}
O_{11} & O_{12} & O_{13} \\
O_{21} & O_{22} & O_{23} \\
O_{31} & O_{32} & ?
\end{array}
\]

where $O_{ij}$ is the object in the $i$th row and $j$th column; the ninth cell, $O_{33}$, is missing and must be selected from the response options. Assuming the features of each object are extracted successfully,

¹ In the present model, we follow Carpenter et al. (1990) by hand-coding the features of the items. Several methods for extracting the features of Raven's items have been proposed (Lovett et al., 2010; McGreggor et al., 2010; Rasmussen & Eliasmith, 2011).
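To make the scheme concrete, here is a minimal sketch in Python of the two inference steps described above, run on a toy item. It is our own illustration under stated assumptions, not the paper's implementation: each cell is reduced to a single hand-coded feature (the number of elements in the object), the two-rule hypothesis space and its uniform prior are invented for the example, and sampling from the posterior is replaced by exact enumeration.

    # Minimal sketch: rule induction as Bayesian inference, followed by
    # Bayesian model averaging over the response options.

    # Toy item, one hand-coded feature per cell (number of elements in
    # each object); None marks the missing cell O_33.
    matrix = [
        [1, 2, 3],
        [1, 2, 3],
        [1, 2, None],
    ]

    def constant_rows(m):
        # Rule 1: every object in a row carries the same feature value.
        consistent = all(len({v for v in row if v is not None}) == 1 for row in m)
        return consistent, m[2][1]  # predict: repeat the value in the last row

    def increment_rows(m):
        # Rule 2: the feature value increases by 1 across each row.
        pairs = [(r, c) for r in range(3) for c in range(1, 3) if m[r][c] is not None]
        consistent = all(m[r][c] - m[r][c - 1] == 1 for r, c in pairs)
        return consistent, m[2][1] + 1  # predict: continue the progression

    priors = {constant_rows: 0.5, increment_rows: 0.5}  # invented uniform prior

    # Posterior over rules: likelihood (1 if the rule could have generated
    # the visible cells, else 0) times prior, renormalised. Assumes at
    # least one rule is consistent with the item.
    weights, predictions = {}, {}
    for rule, prior in priors.items():
        consistent, predicted = rule(matrix)
        weights[rule] = (1.0 if consistent else 0.0) * prior
        predictions[rule] = predicted
    total = sum(weights.values())
    posterior = {rule: w / total for rule, w in weights.items()}

    # Bayesian model averaging: an option's choice probability is the
    # posterior mass of all rules that predict it.
    options = [1, 2, 3, 4]
    choice_probs = {o: sum(p for rule, p in posterior.items()
                           if predictions[rule] == o) for o in options}
    print(choice_probs)  # {1: 0.0, 2: 0.0, 3: 1.0, 4: 0.0}

In the full model the hypothesis space is far richer (e.g., the rule taxonomy of Carpenter et al., 1990) and rules are sampled from the posterior, but the averaging step has this form: the predicted response distribution is the posterior-weighted mixture of each rule's prediction, which is what lets the model assign probability to incorrect options as well as the correct one.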

[1]  J. Raven et al., Manual for Raven's Progressive Matrices and Vocabulary Scales, 1962.

[2]  P. A. Carpenter, M. A. Just, and P. Shell, What one intelligence test measures: A theoretical account of the processing in the Raven Progressive Matrices Test, Psychological Review, 1990.

[3]  M. W. Molen et al., Error analysis of Raven test performance, 1994.

[4]  P. De Boeck et al., Generation speed in Raven's Progressive Matrices Test, 1999.

[5]  J. Gustafsson et al., Item Sequencing Effects on the Measurement of Fluid Intelligence, 2000.

[6]  R. Primi et al., Complexity of Geometric Inductive Reasoning Tasks: Contribution to the Understanding of Fluid Intelligence, 2001.

[7]  D. Marr, Vision: A Computational Investigation into the Human Representation and Processing of Visual Information, 1982.

[8]  T. Verguts et al., The induction of solution rules in Raven's Progressive Matrices Test, 2002.

[9]  T. J. Robinson et al., Sequential Monte Carlo Methods in Practice, 2003.

[10]  M. Meo et al., Element salience as a predictor of item difficulty for Raven's Progressive Matrices, 2007.

[11]  N. Goodman, J. B. Tenenbaum, J. Feldman, and T. L. Griffiths, A Rational Analysis of Rule-Based Concept Learning, Cognitive Science, 2008.

[12]  K. McGreggor, M. Kunda, and A. Goel, A Fractal Analogy Approach to the Raven's Test of Intelligence, Visual Representations and Reasoning, 2010.

[13]  A. N. Sanborn et al., Exemplar models as a mechanism for performing Bayesian inference, Psychonomic Bulletin & Review, 2010.

[14]  A. Lovett et al., A Structure-Mapping Model of Raven's Progressive Matrices, 2010.

[15]  R. Catrambone et al., Proceedings of the 32nd Annual Conference of the Cognitive Science Society, 2010.

[16]  T. L. Griffiths et al., Learning invariant features using the Transformed Indian Buffet Process, NIPS, 2010.

[17]  L. E. Matzen et al., Recreating Raven's: Software for systematically generating large numbers of Raven-like matrix problems with normed properties, Behavior Research Methods, 2010.

[18]  A. N. Sanborn et al., Rational approximations to rational models: Alternative algorithms for category learning, Psychological Review, 2010.

[19]  T. L. Griffiths et al., The Indian Buffet Process: An Introduction and Review, Journal of Machine Learning Research, 2011.

[20]  D. Rasmussen and C. Eliasmith, A Neural Model of Rule Generation in Inductive Reasoning, Topics in Cognitive Science, 2011.

[21]  J. L. Austerweil and T. L. Griffiths, Seeking Confirmation Is Rational for Deterministic Hypotheses, Cognitive Science, 2011.

[22]  S. Lewandowsky, Working memory capacity and categorization: Individual differences and modeling, Journal of Experimental Psychology: Learning, Memory, and Cognition, 2011.

[23]  D. K. Sewell et al., Attention and working memory capacity: Insights from blocking, highlighting, and knowledge restructuring, Journal of Experimental Psychology: General, 2012.

[24]  S. Lewandowsky, Working memory capacity and fluid abilities: The more difficult the item, the more more is better, Frontiers in Psychology, 2014.