The Cognitive Reflection Test: how much more than Numerical Ability?

Matthew B. Welsh, Nicholas R. Burns & Paul H. Delfabbro
[{matthew.welsh}; {nicholas.burns}; {paul.delfabbro} @adelaide.edu.au]
University of Adelaide, North Terrace, Adelaide, SA 5005, Australia

Abstract

Frederick's (2005) Cognitive Reflection Test (CRT) is a 3-item task shown to predict susceptibility to decision-making biases better than intelligence measures do. It is described as measuring 'cognitive reflection': a metacognitive trait capturing the degree to which people prefer to reflect on answers rather than give intuitive responses. Herein, we ask how much of the CRT's success can be explained by assuming it is a test of numerical (rather than general) intelligence. Our results show that the CRT is closely related to numerical ability and that its predictive power is limited to biases with a numerical basis. Although it may also capture some aspect of a rational cognitive decision style, it is unrelated to a metacognitive, error-checking and inhibition measure. We conclude that the predictive power of the CRT can largely be explained via numerical ability, without the need to posit a separate 'cognitive reflection' trait.

Keywords: cognitive reflection; heuristics and biases; individual differences; numerical ability; intelligence.

Introduction

Frederick's (2005) Cognitive Reflection Test (CRT) asks people to solve three mathematically simple problems on which the intuitive answers are wrong. Frederick explains CRT performance as reflecting a person's preference for using either System 1 (intuitive) or System 2 (rational) processes (Stanovich & West, 2000). Given the ease with which one can check whether an intuitive answer is incorrect, a person's CRT score indicates how likely they are to reflect on their answer rather than respond intuitively.
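The flavor of the items can be seen in the best known of the three, the 'bat and ball' problem: a bat and a ball together cost $1.10, and the bat costs $1.00 more than the ball. The intuitive System 1 response is "10 cents", but a moment's checking shows it violates the problem's constraints. A minimal sketch of that check:

```python
# Bat-and-ball: bat + ball = 1.10 and bat = ball + 1.00.
# Substituting gives 2 * ball = 1.10 - 1.00, so the ball costs 5 cents.
intuitive_ball = 0.10                 # the System 1 answer: "10 cents"
reflective_ball = (1.10 - 1.00) / 2   # the System 2 answer: 5 cents

# The intuitive answer fails the total-cost constraint ($1.20, not $1.10).
bat_if_intuitive = intuitive_ball + 1.00
assert abs((bat_if_intuitive + intuitive_ball) - 1.10) > 0.05

# The reflective answer satisfies both constraints.
bat = reflective_ball + 1.00
assert abs((bat + reflective_ball) - 1.10) < 1e-9   # totals $1.10
assert abs((bat - reflective_ball) - 1.00) < 1e-9   # bat costs $1.00 more
print(f"ball costs ${reflective_ball:.2f}")
```

The point of the item is precisely this asymmetry: generating the intuitive answer is effortless, while verifying it requires only trivial arithmetic, so failure to do so signals a disposition rather than a lack of ability.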
Frederick's (2005) data show that the CRT is superior to intelligence measures in predicting susceptibility to various cognitive biases, that is, errors made due to inherent cognitive processes (see, e.g., Tversky & Kahneman, 1974); a conclusion supported by Toplak, West and Stanovich's (2011) recent work. Given this surprising finding (that a 3-item test predicts decision-making ability better than intelligence tests do), Frederick's work has been influential, cited over 600 times. Its results, however, are in line with previous findings showing that, while intelligence is useful in predicting some decision-making biases, in other cases intelligence and bias susceptibility seem independent (Stanovich & West, 2008). These findings have led to suggestions that decision style (a person's preference for thinking rationally or intuitively) may be more important than intelligence for predicting bias susceptibility. The CRT shares variance with a number of decision style measures (Frederick, 2005), and 'cognitive reflection' is thought to be central to the metacognitive processes underlying the relationship between System 1 and System 2 thinking. The latter, System 2 processes, inhibit the automatic and frequently incorrect answers generated by System 1 thinking. It is reasonable, then, that intelligence might determine how efficiently a person uses System 2 reasoning, but whether they use it at all may be determined by a separate, metacognitive process, thereby weakening the observed relationship between intelligence and bias susceptibility.

A potential criticism of Frederick's (2005) paper, and of other work in this area, lies in the choice of intelligence measures. For example, a commonly used intelligence measure is self-reported SAT scores (see, e.g., Frederick, 2005; Stanovich & West, 1998). Another is the Wonderlic Personnel Test (Wonderlic, 1973; used in Frederick, 2005, and Furnham, Boo & McClelland, 2012). Finally, Toplak et al. (2011) use the Vocabulary and Matrix Reasoning scales from the Wechsler Abbreviated Scale of Intelligence (WASI; Wechsler, 1999). While all of these do measure 'intelligence', and the WASI divides this into Verbal and Non-verbal ability, none takes into account the current understanding of the hierarchical nature of intelligence described by the Cattell-Horn-Carroll (CHC) model (see, e.g., McGrew, 2005), which recognizes at least ten related cognitive abilities. By focusing on the relationship between general intelligence and bias susceptibility, it is therefore possible to underestimate the relevance of specific abilities to specific biases.

A key omission is numerical ability (Gq, or quantitative ability, in CHC terms). Given that the CRT, and many decision-making problems, rely on numerical calculation to determine the correct response, it seems strange to report correlations between biases and general intelligence rather than the type of intelligence most likely to influence such tasks. Thus, it seems possible that the low predictive power of intelligence on bias susceptibility results from poor measure selection. The way forward, then, is to incorporate measures of the specific abilities most likely to relate to the biases under consideration, thereby establishing an accurate baseline for the strength of the relationship before positing additional constructs like cognitive reflection. Concerning metacognition, this work has already begun, with Toplak et al. (2011) including measures of metacognitive abilities (e.g., working memory; Baddeley & Hitch, 1974) that seem likely to be implicated in recognizing errors in intuition and thus switching from System 1 to System 2 reasoning.

CRT, Heuristics and Biases

Given the numerical basis of the CRT questions, a key question is whether it predicts numerical biases better than

References

[1] K. McGrew. The Cattell-Horn-Carroll theory of cognitive abilities: Past, present, and future. 2005.
[2] J. Cacioppo et al. The need for cognition. 1982.
[3] M. Bar-Hillel. The base-rate fallacy in probability judgments. 1980.
[4] K. Stanovich et al. On the relative independence of thinking biases and cognitive ability. Journal of Personality and Social Psychology, 2008.
[5] A. Furnham et al. Individual differences and the susceptibility to the influence of anchoring cues. 2012.
[6] James W. Pellegrino et al. Training effects and working memory contributions to skill acquisition in a complex coordination task. 1995.
[7] A. Odum et al. Impulsivity, risk taking, and timing. Behavioural Processes, 2012.
[8] K. Stanovich et al. The Cognitive Reflection Test as a predictor of performance on heuristics-and-biases tasks. Memory & Cognition, 2011.
[9] A. Tversky et al. Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. 1983.
[10] Daniel J. Navarro et al. Seeing is believing: Priors, trust, and base rate neglect. 2012.
[11] John O. Willis et al. Wechsler Abbreviated Scale of Intelligence. 2014.
[12] A. Odum. Delay discounting: Trait variable? Behavioural Processes, 2011.
[13] R. Cattell. Personality and mood by questionnaire. 1973.
[14] Tomasz Zaleskiewicz et al. Beyond risk seeking and risk aversion: Personality and the dual nature of economic risk taking. 2001.
[15] S. Frederick. Cognitive reflection and decision making. Journal of Economic Perspectives, 19(4), 25-42, 2005.
[16] M. D'Esposito. Working memory. Handbook of Clinical Neurology, 2008.
[17] K. Stanovich et al. Individual differences in reasoning: Implications for the rationality debate? In Heuristics and Biases, 2002.
[18] Nicholas R. Burns et al. A speeded coding task using a computer-based mouse response. Behavior Research Methods, 2005.
[19] John A. Johnson et al. The international personality item pool and the future of public-domain personality measures. 2006.
[20] A. Wesman. The Differential Aptitude Tests. 1952.
[21] A. Tversky et al. The framing of decisions and the psychology of choice. Science, 1981.
[22] S. Epstein et al. Individual differences in intuitive-experiential and analytical-rational thinking styles. Journal of Personality and Social Psychology, 1996.
[23] A. Tversky et al. Judgment under uncertainty: Heuristics and biases. Science, 1974.
[24] Andrew M. Parker et al. Individual differences in adult decision-making competence. Journal of Personality and Social Psychology, 2007.
[25] Dawn P. Flanagan et al. Contemporary intellectual assessment: Theories, tests, and issues. 1997.
[26] S. Begg et al. Individual differences in anchoring: Traits and experience. 2014.
[27] Keith E. Stanovich et al. Individual differences in rational thought. 1998.
[28] T. Q. Irigaray et al. Intellectual abilities in Alzheimer's disease patients: Contributions from the Wechsler Abbreviated Scale of Intelligence (WASI). 2010.
[29] I. Robertson et al. 'Oops!': Performance correlates of everyday attentional failures in traumatic brain injured and normal subjects. Neuropsychologia, 1997.