Reward or Penalty: Aligning Incentives of Stakeholders in Crowdsourcing

Crowdsourcing is a promising paradigm in which a requester broadcasts a large number of tasks to a crowd of semi-skilled workers in order to obtain reliable solutions. In this paper, we consider four key evaluation indices of a crowdsourcing community (quality, cost, latency, and platform improvement) and show that these indices involve the interests of three stakeholders: the requester, the workers, and the crowdsourcing platform. Because the incentives of these three stakeholders conflict with one another, we take the perspective of the whole crowdsourcing community and design a mechanism that aligns their incentives, with the goal of promoting the community's long-term development. Specifically, workers receive a reward or a penalty based on their reported solutions, rather than only a nonnegative payment. Furthermore, we identify a family of proper reward-penalty function pairs and compute each worker's personal order value, so that the amounts of reward and penalty depend on both the worker's reported beliefs and their individual historical performance, while still preserving workers' incentives. The proposed mechanism helps control latency, promotes quality and the evolution of the crowdsourcing platform, and thereby improves all four evaluation indices. Theoretical analysis and experimental results are provided to validate and evaluate the proposed mechanism, respectively.
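To make the reward-or-penalty idea concrete, the sketch below illustrates one way such a scheme could look. It is not the paper's exact mechanism: it assumes, for illustration only, a quadratic (Brier-style) scoring rule and a hypothetical per-worker `history_weight` standing in for the personal order value; the function names `quadratic_score` and `payment` and the `reward_scale`/`penalty_scale` parameters are likewise assumptions.

```python
"""Illustrative sketch (not the paper's mechanism): a reward-penalty pair
built from the quadratic (Brier) scoring rule, scaled by a hypothetical
per-worker reliability weight derived from historical performance."""


def quadratic_score(belief: float, outcome: int) -> float:
    """Brier-style score in [-1, 1]: positive when the reported belief
    leans toward the realized outcome, negative otherwise."""
    # belief: reported probability that the correct answer is 1; outcome: 0 or 1.
    return 1.0 - 2.0 * (belief - outcome) ** 2


def payment(belief: float, outcome: int, history_weight: float,
            reward_scale: float = 1.0, penalty_scale: float = 1.0) -> float:
    """Reward (positive) or penalty (negative) for one reported solution.

    history_weight in (0, 1] stands in for the worker's personal order value
    from past performance; reward_scale / penalty_scale play the role of a
    reward-penalty function pair (assumed names, for illustration only).
    """
    s = quadratic_score(belief, outcome)
    if s >= 0:
        return reward_scale * history_weight * s   # reward branch
    return penalty_scale * history_weight * s      # penalty branch (negative payment)


if __name__ == "__main__":
    # Confident and correct report -> reward.
    print(payment(belief=0.9, outcome=1, history_weight=0.8))  #  0.784
    # Confident but wrong report -> penalty.
    print(payment(belief=0.9, outcome=0, history_weight=0.8))  # -0.496
```

Because the quadratic score is a proper scoring rule, a worker maximizes expected payment by reporting their true belief; allowing the payment to go negative is what distinguishes this reward-penalty setup from a purely nonnegative payment scheme.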
