Mathematical Modeling of Crowdsourcing Systems: Incentive Mechanism and Rating System Design

Crowdsourcing systems such as Yahoo! Answers, Amazon Mechanical Turk, and Google Helpouts have become increasingly prevalent in the past few years. User participation, high-quality solutions, and a fair rating system are critical to the revenue of a crowdsourcing system. In this paper, we design a class of simple but effective incentive mechanisms that attract users to participate and to provide high-quality solutions. Our incentive mechanism consists of a task bundling scheme and a rating system, and it pays workers according to the ratings their solutions receive from requesters. We also propose a probabilistic model to capture human factors such as rating bias, and we quantify their impact on the incentive mechanism, which we show to be highly robust. We develop a model to characterize the design space of a class of commonly used rating systems, namely threshold-based rating systems. We quantify the impact of such rating systems and the bundling scheme on the incentive mechanism.
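
The abstract names the mechanism's ingredients (task bundling, a threshold-based rating rule, rating bias) without specifying their exact form. The following is a minimal Python sketch, purely illustrative and under assumed parameters (the bundle size, rating threshold, per-bundle payment, and bias probability are hypothetical choices, not values from the paper), of how those ingredients fit together: a worker is paid for a bundle of tasks only if the fraction of positive requester ratings meets a threshold, and each rating may be flipped with some probability to model bias.

```python
import random

# Hypothetical parameters (assumptions for illustration, not from the paper).
BUNDLE_SIZE = 5          # number of tasks grouped into one bundle
RATING_THRESHOLD = 0.8   # fraction of positive ratings required for payment
PAYMENT_PER_BUNDLE = 1.0 # reward paid when the threshold is met
BIAS_PROB = 0.1          # probability a requester's rating is flipped (bias)

def rate_solution(is_high_quality: bool) -> bool:
    """Requester rates a solution; with probability BIAS_PROB the rating is flipped."""
    rating = is_high_quality
    if random.random() < BIAS_PROB:
        rating = not rating
    return rating

def pay_bundle(solution_qualities) -> float:
    """Pay for a bundle only if the share of positive ratings reaches the threshold
    (a threshold-based rating rule applied at the bundle level)."""
    ratings = [rate_solution(q) for q in solution_qualities]
    score = sum(ratings) / len(ratings)
    return PAYMENT_PER_BUNDLE if score >= RATING_THRESHOLD else 0.0

if __name__ == "__main__":
    diligent = [True] * BUNDLE_SIZE                 # high effort on every task
    lazy = [True] + [False] * (BUNDLE_SIZE - 1)     # shirks on most tasks
    print("diligent worker payment:", pay_bundle(diligent))
    print("lazy worker payment:", pay_bundle(lazy))
```

In this toy setup, bundling amplifies the cost of shirking: a single low-quality solution risks forfeiting payment for the whole bundle, while the bias probability lets one study how robust the payment rule is to noisy or unfair ratings.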
