A Bayesian approach for calibrating probability judgments
Eliciting experts’ opinions is one of the main alternatives for addressing a paucity of data. At the forefront of this area is the development of calibration models (CMs): models dedicated to overcoming miscalibration, i.e., judgment biases that reflect deficient reasoning strategies adopted by the expert when inferring about an unknown quantity. One of the main challenges for CMs is determining how and when to intervene against miscalibration, so as to improve the tradeoff between cost (the time spent on calibration processes) and the accuracy of the resulting models. The present paper addresses this issue by proposing a dynamic Bayesian framework for monitoring, diagnosing, and handling miscalibration patterns. The framework is built on Beta-Bernoulli, Uniform-Bernoulli, and Triangular-Bernoulli models, combined with classes of judgmental calibration theories. The usefulness of the proposed framework is discussed and illustrated via simulation studies.
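As a minimal sketch of the kind of building block such a framework relies on, the following Python code illustrates a generic Beta-Bernoulli conjugate update for tracking calibration at one stated probability level. This is an assumption-based illustration, not the paper's actual model: the class name, fields, and the signed miscalibration measure are hypothetical choices made here for exposition.

```python
from dataclasses import dataclass


@dataclass
class BetaBernoulliBin:
    """Tracks calibration for judgments stated near one probability level.

    Hypothetical illustration of a Beta-Bernoulli model: outcomes of events
    the expert assigned probability `stated_p` are treated as Bernoulli draws
    with unknown hit rate theta, given a conjugate Beta(alpha, beta) prior.
    """

    stated_p: float      # probability level asserted by the expert (e.g. 0.8)
    alpha: float = 1.0   # Beta pseudo-count of events that occurred
    beta: float = 1.0    # Beta pseudo-count of events that did not occur

    def update(self, occurred: bool) -> None:
        # Conjugate Bernoulli update: each observation adds one pseudo-count.
        if occurred:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def posterior_mean(self) -> float:
        # Posterior mean of theta under theta | data ~ Beta(alpha, beta).
        return self.alpha / (self.alpha + self.beta)

    def miscalibration(self) -> float:
        # Signed gap between the stated probability and the estimated hit
        # rate: positive suggests overconfidence, negative underconfidence.
        return self.stated_p - self.posterior_mean()


# Usage: monitor six outcomes of events the expert called at 80%.
bin80 = BetaBernoulliBin(stated_p=0.8)
for outcome in [True, True, False, False, True, False]:
    bin80.update(outcome)
print(bin80.posterior_mean(), bin80.miscalibration())
```

In a dynamic setting, a monitor of this sort could be queried after each observed outcome, with an intervention triggered once the miscalibration estimate is credibly far from zero; the paper's framework addresses precisely this how-and-when question.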