Incentive-Compatible Forecasting Competitions

We consider the design of forecasting competitions in which multiple forecasters make predictions about one or more independent events and compete for a single prize. We have two objectives: (1) to award the prize to the most accurate forecaster, and (2) to incentivize forecasters to report truthfully, so that forecasts are informative and forecasters need not spend any cognitive effort strategizing about reports. Proper scoring rules incentivize truthful reporting if all forecasters are paid according to their scores. However, incentives become distorted if only the best-scoring forecaster wins a prize, since forecasters can often increase their probability of having the highest score by reporting extreme beliefs. Even if forecasters do report truthfully, awarding the prize to the forecaster with highest score does not guarantee that high-accuracy forecasters are likely to win; in extreme cases, it can result in a perfect forecaster having zero probability of winning. In this paper, we introduce a truthful forecaster selection mechanism. We lower-bound the probability that our mechanism selects the most accurate forecaster, and give rates for how quickly this bound approaches 1 as the number of events grows. Our techniques can be generalized to the related problems of outputting a ranking over forecasters and hiring a forecaster with high accuracy on future events.
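The two incentive claims in the abstract can be illustrated with a small sketch (not the paper's mechanism; all parameters below are hypothetical). Using the quadratic (Brier) scoring rule as a standard example of a proper scoring rule: when a forecaster is paid their score, expected score is maximized by reporting the true belief; but under winner-take-all, reporting an extreme belief can strictly increase the probability of having the highest score.

```python
import numpy as np

rng = np.random.default_rng(0)

# Quadratic (Brier) scoring rule: S(q, y) = 1 - (q - y)^2
# for report q in [0, 1] and binary outcome y.
def brier_score(q, y):
    return 1.0 - (q - y) ** 2

# (1) Paid-by-score: expected score is maximized by the truthful report.
p = 0.7                              # forecaster's true belief (hypothetical)
reports = np.linspace(0.0, 1.0, 101)
expected = p * brier_score(reports, 1) + (1 - p) * brier_score(reports, 0)
best_report = reports[np.argmax(expected)]   # equals the true belief p

# (2) Winner-take-all distortion: against a truthful rival, an extreme
# report raises the chance of strictly having the highest score, even
# though it lowers the forecaster's expected score.
def strict_win_prob(report, rival_report, p, trials=200_000):
    y = rng.random(trials) < p       # outcomes drawn with true probability p
    return np.mean(brier_score(report, y) > brier_score(rival_report, y))

truthful_win = strict_win_prob(0.7, 0.7, p)  # identical reports: never strictly ahead
extreme_win = strict_win_prob(1.0, 0.7, p)   # report 1.0: strictly ahead whenever y = 1
```

Here `extreme_win` is roughly `p = 0.7`: exaggerating to a report of 1.0 wins outright whenever the event occurs, while a truthful report can at best tie an identical truthful rival — the distortion the abstract describes.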
