Does anchoring cause overconfidence only in experts?

Belinda Bruza, Matthew B. Welsh, Daniel J. Navarro & Stephen H. Begg ({belinda.bruza, matthew.welsh, daniel.navarro, steve.begg}@adelaide.edu.au), The University of Adelaide, SA 5005, Australia

Abstract

The anchoring-and-adjustment heuristic (Tversky & Kahneman, 1974) predicts that eliciting an initial estimate will prompt subsequent minimum and maximum estimates to lie close to it, resulting in narrow ranges and overconfidence. Evidence for this, however, is mixed: while Heywood-Smith, Welsh and Begg (2008) observed narrower subsequent ranges, Block and Harper (1991) report that ranges became wider. One suggestion is that this reflects a difference between expert and novice reactions to elicitation tasks. The present study investigated whether the interplay between expertise and number preferences leads to these paradoxical effects of an initial estimate. Participants with high expertise make precise estimates, whereas participants with less expertise prefer rounded numbers, which could potentially reduce the impact of anchors. We confirm that expertise affects the precision of estimates and observe results indicative of the theorized effect: an interaction between expertise and elicitation method on range widths.

Keywords: anchoring; overconfidence; number preference; precision

In fields where empirical data are limited or unavailable, decisions are often based on expert judgment. For example, current industry practice in petroleum exploration requires exploration geologists to provide 80% confidence ranges on relevant factors (e.g., rock porosity, reservoir thickness) prior to drilling (Hawkins, Coopersmith, & Cunningham, 2002). A typical result, however, is overconfidence (Lichtenstein, Fischhoff, & Phillips, 1982), where the level of confidence reported is much higher than the proportion of ranges containing the true value.
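The calibration measure described above can be made concrete. The following sketch (illustrative only; the data are hypothetical and not from the paper) computes the hit rate of a set of elicited ranges and compares it with the stated confidence level:

```python
# Illustrative sketch: overconfidence as the gap between stated confidence
# and the observed hit rate of elicited (low, high) ranges.
# The ranges and true values below are hypothetical examples.

def hit_rate(ranges, true_values):
    """Fraction of (low, high) ranges that contain the true value."""
    hits = sum(low <= v <= high for (low, high), v in zip(ranges, true_values))
    return hits / len(ranges)

# Five hypothetical 80% confidence ranges from one judge:
ranges = [(10, 30), (5, 8), (100, 150), (40, 45), (0, 2)]
true_values = [25, 9, 170, 43, 1]

rate = hit_rate(ranges, true_values)   # 3 of 5 ranges contain the truth -> 0.6
overconfidence = 0.80 - rate           # stated confidence exceeds hit rate
```

A well-calibrated judge's 80% ranges would capture the true value about 80% of the time; here the positive gap indicates overconfidence.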
This bias has been observed not only in oil and gas industry personnel (Welsh, Bratvold, & Begg, 2005), but in many other expert groups, including clinicians (Christensen-Szalanski & Bushyhead, 1981), business managers (Russo & Schoemaker, 1992) and social scientists (Tetlock, 1999). Theoretical interest in factors affecting overconfidence is therefore shared by technical and psychological disciplines alike.

A popular explanation for overconfidence stems from the anchoring-and-adjustment heuristic, first suggested by Tversky and Kahneman (1974): people start from an initial value, an anchor, from which they adjust insufficiently when providing a range. While this explanation has received support (Russo & Schoemaker, 1992; Heywood-Smith, Welsh, & Begg, 2008), several studies found that requesting an initial best estimate resulted in wider ranges, that is, reduced overconfidence (see, e.g., Block & Harper, 1991; Clemen, 2001; Juslin, Wennerholm, & Olsson, 1999; Soll & Klayman, 2004; Winman, Hansson, & Juslin, 2004).

Yaniv and Foster (1995) theorized that there is a trade-off between accuracy and informativeness in uncertain judgment tasks: the precision, or "graininess", of estimates is used to convey confidence. On the calibration task described above, for example, an individual uncertain of their knowledge should produce a wide, less precise range to represent that uncertainty. However, although wider ranges are more likely to encompass the true value, estimates that are less precise (i.e., "grainier") are also less informative about the true value. It is possible that, in order to boost informativeness, experts in a topic are more inclined than laypeople to generate precise estimates. Should this indeed be the case, such a difference in number preference may help clarify the relationship between anchoring and overconfidence, because number preferences could place limits on the minimum width of a range that vary by elicitation method.
For example, an individual who prefers to give estimates in multiples of 100 (to characterize their uncertainty about the true values) may generate a range of 100-200. If asked first for an initial best guess, the same person would, on the same scale, estimate either 100 (prompting a wider range of 0-200) or 200 (range: 100-300). The wider range resulting from this preference for round numbers would therefore remove any anchoring effect the initial best guess had on the end-points (and thereby reduce overconfidence). Where uncertainty is high and precision low, this effect may be sufficient to overwhelm any anchoring resulting from the best guess. In contrast, an expert's tendency to produce precise estimates (i.e., with fewer trailing zeros) will reduce or avoid this effect, so any anchoring resulting from the best guess will remain observable.

Research Aims

The aim of this study is to investigate the effect that an initial best guess of a true value has on the width of elicited ranges at different levels of expertise. It was hypothesized that individuals with less expertise would prefer to report estimates in rounded numbers: a best guess would be made as, for example, a multiple of 10, and subsequent adjustment from this anchor would be made on the same scale to obtain minimum and maximum estimates, thereby reducing the impact of anchoring. Conversely, highly expert individuals would report precise estimates, so anchoring on the best guess would be more apparent, as adjustments for ranges are made on a smaller scale.
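The round-number argument above can be sketched as a simple model. The sketch below (an illustration under stated assumptions, not the authors' model) assumes a judge who only reports numbers on a grid of step `g` and adjusts one grid step either side of the snapped best guess, so that preferred granularity places a floor on range width:

```python
# Illustrative model (assumption, not from the paper): a judge reports only
# multiples of a preferred granularity g, and adjusts one grid step up and
# down from the (snapped) best guess. Coarse grids force wide ranges.

def snap(x, g):
    """Round x to the nearest multiple of the preferred granularity g."""
    return round(x / g) * g

def adjusted_range(best_guess, g):
    """Min/max obtained by adjusting one grid step either side of the guess."""
    guess = snap(best_guess, g)
    return guess - g, guess + g

# A novice thinking in multiples of 100 vs. an expert thinking in units,
# both starting from a true best guess of 130:
novice_range = adjusted_range(130, 100)   # snaps to 100 -> (0, 200), width 200
expert_range = adjusted_range(130, 1)     # stays at 130 -> (129, 131), width 2
```

Under these assumptions the novice's minimum range width is 2g regardless of the anchor, which is why round-number preferences could swamp any anchoring effect, while the expert's fine grid leaves anchoring on the best guess visible.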

[1] Yaniv, I., & Foster, D. P. (1995). Graininess of judgment under uncertainty: An accuracy-informativeness trade-off.

[2] Navarro, D. J., et al. (2011). Number preference, precision and implicit confidence. Proceedings of the Annual Conference of the Cognitive Science Society (CogSci).

[3] Block, R. A., & Harper, D. R. (1991). Overconfidence in estimation: Testing the anchoring-and-adjustment hypothesis.

[4] Lichtenstein, S., Fischhoff, B., & Phillips, L. D. (1982). Calibration of probabilities: The state of the art to 1980.

[5] Soll, J. B., & Klayman, J. (2004). Overconfidence in interval estimates. Journal of Experimental Psychology: Learning, Memory, and Cognition.

[6] Welsh, M. B., Bratvold, R. B., & Begg, S. H. (2005). Cognitive biases in the petroleum industry: Impact and remediation.

[7] Conover, W. J., & Iman, R. L. (1981). Rank transformations as a bridge between parametric and nonparametric statistics.

[8] Winman, A., Hansson, P., & Juslin, P. (2004). Subjective probability intervals: How to reduce overconfidence by interval evaluation. Journal of Experimental Psychology: Learning, Memory, and Cognition.

[9] Algina, J., et al. (2008). A generally robust approach for testing hypotheses and setting confidence intervals for effect sizes. Psychological Methods.

[10] Tetlock, P. E. (1999). Theory-driven reasoning about plausible pasts and probable futures in world politics: Are we prisoners of our preconceptions?

[11] Tetlock, P. E. (2002). Theory-driven reasoning about plausible pasts and probable futures in world politics. In Heuristics and Biases.

[12] Monaghan, P., et al. (2001). Proceedings of the 23rd Annual Conference of the Cognitive Science Society.

[13] Juslin, P., Wennerholm, P., & Olsson, H. (1999). Format dependence in subjective probability calibration.

[14] Erceg-Hurn, D. M., et al. (2008). Modern robust statistical methods: An easy way to maximize the accuracy and power of your research. American Psychologist.

[15] Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science.

[16] Fischhoff, B., et al. (1977). Calibration of probabilities: The state of the art.

[17] Hawkins, J. T., et al. (2002). Improving stochastic evaluations using objective data analysis and expert interviewing techniques.

[18] Christensen-Szalanski, J. J. J., & Bushyhead, J. B. (1981). Physicians' use of probabilistic information in a real clinical setting.