Peer Truth Serum: Incentives for Crowdsourcing Measurements and Opinions

Modern decision-making tools are based on statistical analysis of abundant data, which is often collected by querying multiple individuals. We consider data collection through crowdsourcing, where independent, self-interested, non-expert agents report measurements (such as sensor readings), opinions (such as product reviews), or answers to human intelligence tasks. Since the accuracy of information is positively correlated with the effort invested in obtaining it, self-interested agents tend to report low-quality data. There is therefore a need for incentives that cover the cost of effort while discouraging random reports. We propose a novel incentive mechanism called the Peer Truth Serum that encourages truthful and accurate reporting, and we show that it is the unique mechanism satisfying a combination of desirable properties.
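To make the incentive idea concrete, the following is a minimal sketch of a Peer Truth Serum-style payment rule as it is commonly presented in the peer-prediction literature: an agent's report is compared against a randomly chosen peer's report, and a matching report is rewarded in inverse proportion to its prior probability, so agreeing on surprising (low-prior) answers pays more than agreeing on expected ones. The function name, the `prior` dictionary, and the `scale` parameter are illustrative assumptions, not the paper's exact formulation.

```python
def pts_payment(report, peer_report, prior, scale=1.0):
    """Illustrative Peer Truth Serum-style payment.

    Pays scale / prior[report] when the report matches the peer's
    report, and 0 otherwise. Because rarer answers earn a larger
    reward on a match, truthfully reporting a surprising observation
    can be more profitable than defaulting to the popular answer.
    """
    if report == peer_report:
        return scale / prior[report]
    return 0.0


# Hypothetical prior over answers to a binary quality question.
prior = {"good": 0.8, "bad": 0.2}

# Matching on the rare answer pays 1/0.2 = 5.0 per unit scale,
# matching on the common answer only 1/0.8 = 1.25.
```

Under this rule, an agent who observed "bad" and believes her peer likely did too expects a higher payoff from reporting "bad" than from herding on "good", which is the intuition behind rewarding agreement weighted by inverse prior probability.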
