Multiple Experts vs. Multiple Methods: Combining Correlation Assessments

Averaging forecasts from several experts has been shown to improve forecasting accuracy and to reduce the risk of bad forecasts. Similarly, it is widely accepted in decision analysis that an expert can benefit from using more than one assessment method to look at a situation from different viewpoints. In this paper, we investigate gains in accuracy in assessing correlations by averaging different assessments from a single expert and/or from multiple experts. Adding experts and adding methods can both improve accuracy, with diminishing returns to extra experts or methods. The gains are generally much greater from adding experts than from adding methods, and restricting the set of experts to those who perform particularly well individually leads to the greatest improvements in the averaged assessments. The variability of assessment accuracy decreases considerably as the number of experts or methods increases, implying a large reduction in risk. We discuss conditions under which the general pattern of results obtained here might be expected to hold, or to differ, in other situations with multiple experts and/or multiple methods.
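To make the averaging mechanism concrete, the sketch below simulates it under simplifying assumptions that are not taken from the paper: each expert's correlation assessment is modeled as the true correlation plus independent, unbiased Gaussian noise, and assessments are combined by a simple average (true_rho, noise_sd, and the expert counts are hypothetical parameters chosen for illustration).

```python
# Minimal Monte Carlo sketch of averaging correlation assessments.
# Assumptions (not from the paper): each expert reports the true correlation
# plus independent, unbiased Gaussian noise; combining is a simple average.
import numpy as np

rng = np.random.default_rng(0)

true_rho = 0.6      # hypothetical true correlation being assessed
noise_sd = 0.2      # assumed spread of an individual assessment's error
n_trials = 10_000   # Monte Carlo repetitions

for n_experts in (1, 2, 4, 8):
    # Draw each expert's assessment and clip to the valid range [-1, 1].
    estimates = np.clip(
        true_rho + rng.normal(0.0, noise_sd, size=(n_trials, n_experts)),
        -1.0, 1.0,
    )
    combined = estimates.mean(axis=1)  # simple average across experts
    errors = np.abs(combined - true_rho)
    print(f"{n_experts} expert(s): MAE = {errors.mean():.3f}, "
          f"error SD = {errors.std():.3f}")
```

Under these independence assumptions the error of the averaged assessment shrinks roughly like 1/sqrt(n), reproducing both the gains and the diminishing returns described above. In practice the gains are smaller than this idealization suggests, because assessors' errors tend to be positively correlated, plausibly more so across methods used by the same expert than across different experts.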
