Confidence depends on level of aggregation

The credible intervals that people set around their point estimates are typically too narrow (cf. Lichtenstein, Fischhoff, & Phillips, 1982). That is, a set of many such intervals does not contain the actual values of the criterion variables as often as it should given the probability assigned to this event for each estimate. The typical interpretation of such data is that people are overconfident about the accuracy of their judgments. This paper presents data from two studies showing the typical levels of overconfidence for individual estimates of unknown quantities. However, data from the same subjects on a different measure of confidence for the same items, their own global assessment for the set of multiple estimates as a whole, showed significantly lower levels of confidence and overconfidence than their average individual assessment for items in the set. It is argued that the event and global assessments of judgment quality are fundamentally different and are affected by unique psychological processes. Finally, we discuss the implications of a difference between confidence in single and multiple estimates for confidence research and theory.
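To make the calibration measure concrete, here is a minimal Python sketch (not from the paper; all numbers and names are illustrative) of how item-level overconfidence can be computed as the gap between average stated confidence and the observed hit rate of the credible intervals, alongside a single global assessment of the whole set of estimates.

```python
# Illustrative sketch (not from the paper): item-level vs. global (aggregate)
# confidence for a set of interval estimates. All data are made up.

def item_level_overconfidence(confidences, hits):
    """Mean stated confidence minus observed hit rate of the intervals."""
    assert len(confidences) == len(hits)
    mean_conf = sum(confidences) / len(confidences)
    hit_rate = sum(hits) / len(hits)
    return mean_conf - hit_rate, mean_conf, hit_rate

# Stated probability that each credible interval contains the true value,
# and whether it actually did (1 = contained, 0 = missed).
confidences = [0.90, 0.90, 0.85, 0.95, 0.90, 0.80, 0.90, 0.85, 0.95, 0.90]
hits        = [1,    0,    1,    0,    1,    1,    0,    1,    0,    1]

over, mean_conf, hit_rate = item_level_overconfidence(confidences, hits)
print(f"mean item confidence:      {mean_conf:.2f}")  # 0.89
print(f"observed hit rate:         {hit_rate:.2f}")   # 0.60
print(f"item-level overconfidence: {over:+.2f}")      # +0.29, intervals too narrow

# A single global judgment of how many of the 10 intervals contain the truth
# (hypothetical value), compared with the actual count.
global_estimate = 7          # judge expects 7 of 10 intervals to be correct
actual_correct = sum(hits)   # 6 in this made-up data
print(f"global overconfidence:     {(global_estimate - actual_correct) / len(hits):+.2f}")
```

In this toy example the global judgment is much closer to the true hit rate than the average of the item-level confidences, which is the pattern the abstract describes: confidence, and hence overconfidence, depends on the level of aggregation at which it is assessed.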
