Reliability and Coherence of Causal, Diagnostic, and Joint Subjective Probabilities

Probabilistic knowledge is an important input to the analysis of many decisions, and may be required for long-range forecasting, classical decision analysis, influence diagrams, fault tree analysis, and expert systems. If objective probabilities cannot be calculated, it is vital that decision makers use the best available subjective measures of probability. Standard mathematical theory allows the analyst several choices in framing subjective probability assessments. When assessing cause-effect probabilities, the three choices are causal, diagnostic, and joint probabilities. Choosing among these is difficult in light of conflicting reports that, due to cognitive heuristics, probability judgment may be biased in various poorly understood ways. For example, it has been reported that, when judging causal probabilities, people are subject to the causal information bias and thus revise their prior probabilities upward more than when judging diagnostic probabilities. On the other hand, it has also been reported that in some cases people do not take proper account of new evidence, which results in under-revision of prior probabilities. Furthermore, it has been reported that, when assessing joint probabilities, people are subject to the conjunction fallacy and thus often judge a joint probability to be higher than one of the two corresponding marginal probabilities. Our research compares the relative effects of these biases in a laboratory setting: we present new empirical results comparing the test-retest reliabilities and the under- and over-revision rates of causal, diagnostic, and joint probability judgments. Our results suggest that the tendency both to under-revise and to over-revise prior probabilities is greatest when judging diagnostic probabilities, as opposed to either causal or joint probabilities. Furthermore, our results suggest that joint probability judgment is more reliable than either causal or diagnostic probability judgment.
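
To make the three framings and the coherence constraint concrete, the relationships among the assessments can be sketched in standard probability notation (the symbols C and E below are illustrative labels for a cause and an effect, not items from the study materials):

\[
  \underbrace{P(E \mid C)}_{\text{causal}}, \qquad
  \underbrace{P(C \mid E)}_{\text{diagnostic}}, \qquad
  \underbrace{P(C \wedge E)}_{\text{joint}}
\]
\[
  P(C \wedge E) \;=\; P(E \mid C)\,P(C) \;=\; P(C \mid E)\,P(E)
  \qquad \text{(Bayes' rule links the three framings)}
\]
\[
  P(C \wedge E) \;\le\; \min\bigl(P(C),\, P(E)\bigr)
  \qquad \text{(coherence bound violated by the conjunction fallacy)}
\]

In this notation, under- or over-revision means that a judged posterior moves less or more from the prior P(C) than the Bayesian posterior P(C|E) = P(E|C)P(C)/P(E) requires.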