Incentives for Effort in Crowdsourcing Using the Peer Truth Serum

Crowdsourcing is widely proposed as a method for solving a large variety of judgment tasks, such as classifying website content, peer grading in online courses, or collecting real-world data. Because the data reported by workers cannot be verified, there is a tendency to report random data without actually solving the task. This can be countered by making the reward for an answer depend on its consistency with the answers given by other workers, an approach called peer consistency. However, such schemes admit uninformative equilibria: workers can maximize their reward by all reporting the same answer without solving the task. Dasgupta and Ghosh [2013] show that, in some cases, exerting high effort can be encouraged in the highest-paying equilibrium. In this article, we present a general mechanism that implements this idea and is applicable to most crowdsourcing settings. Furthermore, we experimentally test the novel mechanism and validate its theoretical properties.

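To make the peer-consistency idea concrete, here is a minimal sketch of a Peer Truth Serum-style payment rule (not the article's exact mechanism; the function names, the scaling constant, and the example prior are our own illustrative assumptions): a worker is paid only when their answer matches that of a randomly chosen peer, and the payment is scaled by the inverse of the prior probability of the agreed answer, so agreeing on an a priori unlikely answer pays more than agreeing on one everyone would give anyway.

```python
import random

def pts_payment(report, peer_report, prior, scale=1.0):
    """Pay only on agreement with the peer, scaled by 1 / R(x),
    where R is a prior distribution over answers. Agreeing on a
    rare answer pays more than agreeing on a common one."""
    if report != peer_report:
        return 0.0
    return scale / prior[report]

# Hypothetical example: three labels with an uneven prior R.
prior = {"cat": 0.6, "dog": 0.3, "bird": 0.1}
reports = {"w1": "bird", "w2": "bird", "w3": "cat"}

for worker, answer in reports.items():
    peers = [a for w, a in reports.items() if w != worker]
    peer_answer = random.choice(peers)  # score against one random peer
    print(worker, answer, "->", pts_payment(answer, peer_answer, prior))
```

Note the intended property of this scaling: if peers answer according to the prior R, the expected payment is the same constant regardless of what a worker reports, which removes the incentive to herd on the most likely answer; only answers that are more likely among peers than the prior suggests pay more in expectation.
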
[1] M. Kearns, et al. An Algorithm That Finds Truth Even If Most People Are Wrong, 2007.

[2] Wai-Tat Fu, et al. Enhancing reliability using peer consistency evaluation in human computation, 2013, CSCW '13.

[3] Boi Faltings, et al. Incentives for Answering Hypothetical Questions, 2011.

[4] Ryan P. Adams, et al. Trick or treat: putting peer prediction to the test, 2014.

[5] Roy N. Colvile, et al. Uncertainty in dispersion modelling and urban air quality mapping, 2002.

[6] David M. Pennock, et al. Collective revelation: a mechanism for self-verified, weighted, and truthful predictions, 2009, EC '09.

[7] Boi Faltings, et al. Incentives to Counter Bias in Human Computation, 2014, HCOMP.

[8] Yoav Shoham, et al. Truthful Surveys, 2008, WINE.

[9] Laura A. Dabbish, et al. Labeling images with a computer game, 2004, AAAI Spring Symposium: Knowledge Collection from Volunteer Contributors.

[10] Andreas Krause, et al. Truthful incentives in crowdsourcing tasks using regret minimization mechanisms, 2013, WWW.

[11] Yiling Chen, et al. Output Agreement Mechanisms and Common Knowledge, 2014, HCOMP.

[12] Christopher G. Harris. You're Hired! An Examination of Crowdsourcing Incentive Models in Human Resource Tasks, 2011.

[13] Boi Faltings, et al. Mechanisms for Making Crowds Truthful, 2014, J. Artif. Intell. Res.

[14] Wai-Tat Fu, et al. Don't hide in the crowd!: increasing social transparency between peer workers improves crowdsourcing outcomes, 2013, CHI.

[15] Boi Faltings, et al. A Robust Bayesian Truth Serum for Non-Binary Signals, 2013, AAAI.

[16] Paul Resnick, et al. Eliciting Informative Feedback: The Peer-Prediction Method, 2005, Manag. Sci.

[17] David C. Parkes, et al. Learning the Prior in Minimal Peer Prediction, 2013.

[18] David C. Parkes, et al. Random Utility Theory for Social Choice, 2012, NIPS.

[19] David C. Parkes, et al. A Robust Bayesian Truth Serum for Small Populations, 2012, AAAI.

[20] Aaron D. Shaw, et al. Designing incentives for inexpert human raters, 2011, CSCW.

[21] Anirban Dasgupta, et al. Crowdsourced judgement elicitation with endogenous proficiency, 2013, WWW.

[22] Boi Faltings, et al. Swissnoise: Online Polls with Game-Theoretic Incentives, 2014, AAAI.

[23] Boi Faltings, et al. Incentives for Truthful Information Elicitation of Continuous Signals, 2014, AAAI.

[24] Nicholas R. Jennings, et al. Mechanism design for the truthful elicitation of costly probabilistic estimates in distributed information systems, 2011, Artif. Intell.

[25] Yiling Chen, et al. Elicitability and knowledge-free elicitation with peer prediction, 2014, AAMAS.

[26] David C. Parkes, et al. Peer prediction without a common prior, 2012, EC '12.

[27] Eric Horvitz, et al. Incentives for truthful reporting in crowdsourcing, 2012, AAMAS.

[28] Boi Faltings, et al. Incentive Mechanisms for Community Sensing, 2014, IEEE Transactions on Computers.

[29] Hai Yang, et al. ACM Transactions on Intelligent Systems and Technology - Special Section on Urban Computing, 2014.

[30] D. Prelec. A Bayesian Truth Serum for Subjective Data, 2004, Science.

[31] Boi Faltings, et al. Incentives for Subjective Evaluations with Private Beliefs, 2015, AAAI.

[32] Boi Faltings, et al. Incentive Schemes for Participatory Sensing, 2015, AAMAS.

[33] David C. Parkes, et al. Dwelling on the Negative: Incentivizing Effort in Peer Prediction, 2013, HCOMP.