Learning to Incentivize: Eliciting Effort via Output Agreement

In crowdsourcing, when contributed answers cannot be directly verified, output agreement mechanisms are often used to incentivize participants to report truthfully, provided the correct answer is held by the majority. In this paper, we focus on using output agreement mechanisms to elicit effort, in addition to truthful answers, from a population of workers. We consider a setting where workers have heterogeneous costs of effort exertion and study the data requester's problem of setting the reward level in output agreement for optimal elicitation. In particular, when the requester knows the cost distribution, we derive the optimal reward level for output agreement mechanisms; this is achieved by first characterizing the Bayesian Nash equilibria of output agreement mechanisms at any given reward level. When the cost distribution is unknown to the requester, we develop sequential mechanisms that combine learning the cost distribution with incentivizing effort exertion to approximately determine the optimal reward level.
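To illustrate the incentive structure the abstract describes, the following is a minimal sketch of a worker's best-response calculation under an output agreement mechanism. It assumes a simplified model not taken from the paper: binary answers, accuracy p when effort is exerted, random guessing (accuracy 0.5) otherwise, and a peer who exerts effort. The function names and parameters are illustrative.

```python
# Hedged sketch: best-response analysis in a binary output agreement game.
# Assumptions (illustrative, not from the paper): binary answers, a worker
# who is correct with probability p if she exerts effort and guesses
# (probability 0.5) otherwise, and a peer who exerts effort.

def agreement_prob(p_self: float, p_peer: float) -> float:
    """Probability that two workers' reports match (both correct or both wrong)."""
    return p_self * p_peer + (1 - p_self) * (1 - p_peer)

def effort_is_best_response(reward: float, cost: float, p: float) -> bool:
    """Effort beats guessing iff the marginal gain in expected reward
    from the higher agreement probability covers the cost of effort."""
    gain = reward * (agreement_prob(p, p) - agreement_prob(0.5, p))
    return gain >= cost
```

For example, with p = 0.9 the marginal agreement gain is reward * 0.32, so a worker with cost 0.2 exerts effort at reward 1.0 but not at reward 0.5. With heterogeneous costs, the requester's problem in the paper is choosing the reward level that trades off this participation threshold against the payment made.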
