Maximum Entropy Aggregation of Expert Predictions

This paper presents a maximum entropy framework for aggregating expert opinions, where each expert's opinion is a prediction of the outcome of an uncertain event. The event to be predicted and the individual predictions rendered are assumed to be discrete random variables. A measure of expert competence is defined using a distance metric between the actual outcome of the event and each expert's predicted outcome. Following Levy and Delic (Levy, W. B., H. Delic. 1994. Maximum entropy aggregation of individual opinions. IEEE Trans. Sys. Man & Cybernetics 24 606--613.), we use Shannon's information measure (Shannon, C. E. 1948. A mathematical theory of communication. Bell Syst. Tech. J. 27 379--423; Jaynes, E. T. 1957. Information theory and statistical mechanics. Phys. Rev. 106 Part I: 620--630, 108 Part II: 171--190.) to derive aggregation rules for combining two or more expert predictions into a single aggregated prediction that appropriately calibrates different degrees of expert competence and reflects any dependence that may exist among the expert predictions. The resulting maximum entropy aggregated prediction is least prejudiced in the sense that it utilizes all available information while remaining maximally noncommittal with regard to information that is not available. Numerical examples illuminating the implications of maximum entropy aggregation are also presented.
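The maximum entropy principle invoked above can be illustrated with a small sketch. This is not the paper's aggregation rule itself; the function name, the choice of gradient descent on the dual, and the example constraint are illustrative assumptions. Under linear expectation constraints, the entropy maximizer has exponential form p_i ∝ exp(Σ_j λ_j f_j(i)), and the multipliers λ can be found by descending the convex dual objective log Z(λ) − Σ_j λ_j m_j:

```python
import math

def max_entropy_distribution(features, targets, lr=0.5, iters=20000):
    """Maximum entropy distribution p over n discrete outcomes subject
    to the linear constraints E_p[f_j] = targets[j] for each feature f_j.

    The maximizer has exponential form p_i proportional to
    exp(sum_j lam[j] * f_j(i)); the multipliers lam are found by
    gradient descent on the convex dual log Z(lam) - lam . targets."""
    n = len(features[0])
    lam = [0.0] * len(features)
    for _ in range(iters):
        # Exponential-family distribution implied by the current multipliers.
        w = [math.exp(sum(l * f[i] for l, f in zip(lam, features)))
             for i in range(n)]
        z = sum(w)
        p = [wi / z for wi in w]
        # Dual gradient: model expectation minus the target expectation.
        lam = [l - lr * (sum(p[i] * f[i] for i in range(n)) - t)
               for l, f, t in zip(lam, features, targets)]
    return p

# Three possible outcomes {0, 1, 2}; constrain the mean to 1.2.
# With no constraint the maximizer is uniform; the mean constraint
# tilts probability mass toward the larger outcomes.
p = max_entropy_distribution([[0.0, 1.0, 2.0]], [1.2])
```

With several experts, the same machinery applies to a joint distribution over expert predictions and the event, with constraints encoding each expert's competence and any known dependence; that is the setting the paper develops.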

[1]  C. E. Shannon. A Mathematical Theory of Communication. Bell Syst. Tech. J., 1948.

[2]  E. T. Jaynes. Information Theory and Statistical Mechanics. Phys. Rev., 1957.

[3]  G. Stigler. The Economics of Information. Journal of Political Economy, 1961.

[4]  J. Darroch et al. Generalized Iterative Scaling for Log-Linear Models, 1972.

[5]  Peter A. Morris. Decision Analysis Expert Use, 1974.

[6]  Peter A. Morris. Combining Expert Judgments: A Bayesian Approach, 1977.

[7]  R. Levine et al. An Algorithm for Finding the Distribution of Maximal Entropy, 1979.

[8]  E. T. Jaynes et al. Where Do We Stand on Maximum Entropy, 1979.

[9]  Rodney W. Johnson et al. Axiomatic Derivation of the Principle of Maximum Entropy and the Principle of Minimum Cross-Entropy. IEEE Trans. Inf. Theory, 1980.

[10]  K. McConway. Marginalization and Linear Opinion Pools, 1981.

[11]  R. L. Winkler. Combining Probability Distributions from Dependent Information Sources, 1981.

[12]  R. Bordley. A Multiplicative Formula for Aggregating Probability Assessments, 1982.

[13]  E. T. Jaynes. On the Rationale of Maximum-Entropy Methods. Proceedings of the IEEE, 1982.

[14]  G. W. Hill. Group Versus Individual Performance: Are N + 1 Heads Better Than One? Psychological Bulletin, 1982.

[15]  Peter A. Morris. An Axiomatic Approach to Expert Resolution, 1983.

[16]  C. Genest. Pooling Operators with the Marginalization Property, 1984.

[17]  Robert L. Winkler et al. Limits for the Precision and Value of Information from Dependent Sources. Oper. Res., 1985.

[18]  Christian Genest et al. Modeling Expert Judgments for Bayesian Updating, 1985.

[19]  Christian Genest et al. Combining Probability Distributions: A Critique and an Annotated Bibliography, 1986.

[20]  W. Levy. A Computational Approach to Hippocampal Function, 1989.

[21]  A. A. J. Marley. Aggregation Theorems and Multidimensional Stochastic Choice Models, 1991.

[22]  Robert L. Winkler et al. Aggregating Point Estimates: A Flexible Modeling Approach, 1993.

[23]  I. J. Myung. Maximum Entropy Interpretation of Decision Bound and Context Models of Categorization, 1994.