Honesty Via Choice-Matching

We introduce choice-matching, a class of mechanisms for eliciting honest responses to a multiple-choice question (MCQ), as might appear in a market research study, opinion poll, or economics experiment. Under choice-matching, respondents are compensated through an auxiliary task, e.g., a personal consumption choice or a forecast. Their compensation depends both on their own performance on the auxiliary task and on the performance of those respondents who matched their response to the MCQ. We give conditions for such mechanisms to be strictly truth-inducing, focusing on a special case in which the auxiliary task is to predict the answers of other respondents. (JEL C78, C83, D81, D82, D83)
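To make the payoff structure concrete, here is a minimal sketch in Python of the special case described above, where the auxiliary task is predicting the distribution of other respondents' answers. The quadratic (Brier-style) scoring rule, the blending weight `w`, and the function name `choice_matching_payoffs` are illustrative assumptions rather than the paper's specification; the sketch only instantiates the two ingredients the abstract names: a respondent's own score on the auxiliary task and the average score of the respondents who matched her MCQ answer.

```python
import numpy as np

def choice_matching_payoffs(answers, predictions, w=0.5):
    """Illustrative choice-matching payoffs (a sketch, not the paper's
    exact rule). Each respondent i picks an MCQ option (answers[i] in
    0..K-1) and, as the auxiliary task, predicts the empirical answer
    distribution (predictions[i], a length-K probability vector).
    Predictions are scored with a quadratic (Brier-style) rule against
    the realized answer shares, and respondent i's payoff blends her
    own score with the average score of her answer-matched group."""
    answers = np.asarray(answers)
    predictions = np.asarray(predictions, dtype=float)
    n, k = predictions.shape

    # Realized empirical distribution of MCQ answers.
    shares = np.bincount(answers, minlength=k) / n

    # Quadratic score of each prediction p against shares q:
    # 2 * p.q - ||p||^2, a strictly proper scoring rule (higher is better).
    own_score = 2 * predictions @ shares - (predictions ** 2).sum(axis=1)

    # Average auxiliary-task score within each answer-matched group.
    # (Sketch assumes every option is chosen by at least one respondent.)
    group_score = np.array(
        [own_score[answers == a].mean() for a in range(k)]
    )

    # Payoff blends own performance with the matched group's performance.
    return w * own_score + (1 - w) * group_score[answers]

# Example: 5 respondents, 3-option MCQ.
answers = [0, 0, 1, 2, 1]
predictions = [
    [0.5, 0.3, 0.2],
    [0.4, 0.4, 0.2],
    [0.2, 0.6, 0.2],
    [0.3, 0.3, 0.4],
    [0.3, 0.5, 0.2],
]
print(choice_matching_payoffs(answers, predictions))
```

Note how, under a rule of this shape, the MCQ answer affects a respondent's pay only through which group's auxiliary scores she shares in; this coupling between the stated answer and the matched group's performance is what the truth-inducing conditions in the paper are about.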
