Scoring-rule feedback and the overconfidence syndrome in subjective probability forecasting

Abstract

Previous research has shown that people tend to be overconfident when making subjective probability forecasts. This study tests the hypothesis that scoring-rule-based payoffs and feedback would lead to better probability forecasts. Subjects predicted the freshman grade point averages of 40 other college students using data on those students' genders, SAT scores, and high school grade point averages. The results of the study were mixed. Trial-by-trial outcome feedback had no effect on performance. Incentive pay based on a truncated logarithmic scoring rule did have an effect: subjects motivated by scoring-rule-based pay were much less likely to assign very large or very small probabilities and performed much better in terms of the logarithmic scoring rule itself. On the other hand, scoring-rule-based incentives had no effect on the extent to which subjects' forecasts corresponded with forecasts generated by an actuarial Bayesian classification model. Thus, the only positive effect of scoring-rule-based incentives was to deter the use of very small probabilities.
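For reference, the logarithmic scoring rule for a binary event, and one common way of truncating it so that the penalty for near-zero forecasts remains bounded, can be sketched as follows; the truncation threshold \(\epsilon\) is illustrative, since the abstract does not state the cutoff used in the study:
\[
S(p, x) = x \ln p + (1 - x)\ln(1 - p),
\qquad
S_{\mathrm{trunc}}(p, x) = S\bigl(\max(\epsilon,\ \min(1 - \epsilon,\ p)),\ x\bigr),
\]
where \(p\) is the forecast probability, \(x \in \{0, 1\}\) indicates whether the event occurred, and forecasts are clamped to \([\epsilon,\ 1 - \epsilon]\) before scoring. Under such a rule, assigning an extreme probability to an outcome that fails to occur incurs a large (though bounded) penalty, which is consistent with the finding that scoring-rule-based pay deterred the use of very small probabilities.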