A model of calibration for subjective probabilities

Abstract A mathematical model is developed to describe the calibration of discrete subjective probabilities and is compared with published group calibration results and with new data. The model is appropriate to probability assessment tasks, in a variety of formats, that can be considered from a signal detection point of view, such as giving the probability that a particular two-category classification is correct. The model assumes that the respondent partitions the range of a decision variable and maps the set of response probabilities onto it. Such a model can account for the systematic effect of proportion correct on the degree of under- or overconfidence; it indicates the ways in which training can affect calibration; it makes specific predictions about base rate effects; it provides a measure of “knowing what one knows”; and it gives a unifying viewpoint for a large body of experimental work on calibration.
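As a minimal illustrative sketch (not the paper's model or notation), the partition-and-mapping idea can be simulated in a two-category signal detection setup: the respondent observes a decision variable drawn from one of two overlapping Gaussian distributions, classifies by its sign, and reports the probability assigned to whichever interval of an assumed partition the variable's magnitude falls in. The cutoffs, response probabilities, and discriminability value below are arbitrary choices for demonstration only.

```python
import random

# Illustrative partition of the decision axis (|x|) and the discrete
# response probabilities mapped onto its intervals. These values are
# hypothetical, chosen only to demonstrate the mechanism.
CUTOFFS = [0.25, 0.6, 1.0, 1.6]        # interval boundaries on |x|
PROBS   = [0.5, 0.6, 0.7, 0.8, 0.9]    # probability response for each interval

def stated_probability(x):
    """Map the decision variable onto a discrete probability response."""
    magnitude = abs(x)
    for cut, p in zip(CUTOFFS, PROBS):
        if magnitude < cut:
            return p
    return PROBS[-1]  # beyond the last cutoff: highest response

def simulate(n_trials=20000, d_prime=1.0, seed=0):
    """Simulate two-category classification trials and tabulate the
    proportion correct for each stated probability."""
    rng = random.Random(seed)
    tallies = {p: [0, 0] for p in PROBS}   # p -> [correct, total]
    for _ in range(n_trials):
        category = rng.choice([-1, 1])               # true class
        x = rng.gauss(category * d_prime / 2, 1.0)   # observed decision variable
        decision = 1 if x >= 0 else -1               # classify by sign
        p = stated_probability(x)
        tallies[p][0] += (decision == category)
        tallies[p][1] += 1
    return {p: correct / total for p, (correct, total) in tallies.items() if total}
```

Comparing the returned proportions correct with the stated probabilities gives a calibration curve: with low discriminability the respondent is overconfident at high stated probabilities, and changing the cutoffs (as training might) changes the curve without changing the underlying sensitivity.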