Incentives for truthful reporting in crowdsourcing
A challenge with the programmatic access to human talent via crowdsourcing platforms is specifying incentives and checking the quality of contributions. Methods for checking quality include paying workers only when the task owner approves their work and hiring additional workers to evaluate contributions. Both approaches place a burden on people and on the organizations commissioning tasks, and both may be susceptible to manipulation by workers and task owners. Moreover, neither the task owner nor the task market may understand the task well enough to evaluate worker reports. Methods for incentivizing workers without external quality checking include rewards based on agreement with a peer worker or with the final output of the system; these approaches are vulnerable to strategic manipulation by workers. Recent experiments on Mechanical Turk have demonstrated the negative influence of manipulation by workers and task owners on crowdsourcing systems [3]. We address this central challenge by introducing incentive mechanisms that promote truthful reporting and discourage manipulation by workers and task owners without introducing additional overhead.
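To make the agreement-based baseline concrete, the sketch below illustrates a simple peer-agreement payment rule: each worker receives a base payment plus a bonus when their report matches that of a randomly chosen peer on the same task. This is a hypothetical illustration of the baseline the abstract critiques, not the mechanism proposed in this work; names such as `peer_agreement_payment`, `base_pay`, and `bonus` are assumptions introduced for the example.

```python
import random

def peer_agreement_payment(reports, base_pay=0.05, bonus=0.10, rng=None):
    """Illustrative peer-agreement rule (not the mechanism proposed here):
    each worker receives base_pay, plus a bonus if their report matches
    the report of a randomly selected peer on the same task.

    reports: dict mapping worker id -> reported label for one task.
    Returns a dict mapping worker id -> payment.
    """
    rng = rng or random.Random()
    payments = {}
    workers = list(reports)
    for w in workers:
        peers = [p for p in workers if p != w]
        if not peers:                      # no peer available: base pay only
            payments[w] = base_pay
            continue
        peer = rng.choice(peers)           # reference peer drawn at random
        matched = reports[w] == reports[peer]
        payments[w] = base_pay + (bonus if matched else 0.0)
    return payments

# Example: three workers label the same item; agreement earns the bonus,
# subject to the random peer draw.
print(peer_agreement_payment({"w1": "cat", "w2": "cat", "w3": "dog"},
                             rng=random.Random(0)))
```

As the abstract notes, such agreement rules can be gamed: workers maximize expected payment by coordinating on a salient answer regardless of ground truth, which is the kind of strategic manipulation the proposed mechanisms aim to discourage.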
[1] Paul Resnick, et al. Eliciting Informative Feedback: The Peer-Prediction Method. Management Science, 2005.
[2] Eric Horvitz, et al. Combining Human and Machine Intelligence in Large-Scale Crowdsourcing. AAMAS, 2012.
[3] Aniket Kittur, et al. Crowdsourcing User Studies with Mechanical Turk. CHI, 2008.
[4] E. Horvitz, et al. Incentives and Truthful Reporting in Consensus-Centric Crowdsourcing, 2012.