MeLoDy: A Long-Term Dynamic Quality-Aware Incentive Mechanism for Crowdsourcing

Crowdsourcing allows requesters to allocate tasks to a group of workers on the Internet and thereby harness their collective intelligence. Quality control is a key design objective in incentive mechanisms for crowdsourcing, since requesters aim to obtain high-quality answers under a limited budget. However, when measuring workers' long-term quality, existing mechanisms either fail to utilize workers' historical information or treat workers' quality as static and ignore its temporal characteristics, and hence perform poorly in the long run. In this paper we propose MeLoDy, a long-term dynamic quality-aware incentive mechanism for crowdsourcing. MeLoDy models the interaction between requesters and workers as reverse auctions that run continuously. For each run of MeLoDy, we design a truthful, individually rational, budget-feasible, and quality-aware task allocation algorithm with polynomial-time computational complexity and an $O(1)$ performance ratio. Moreover, to capture the long-term characteristics of workers' quality, we propose a novel framework in MeLoDy for quality inference and parameter learning based on Linear Dynamical Systems, applied at the end of each run, which takes full advantage of workers' historical information and predicts their quality accurately. Through extensive simulations, we demonstrate that MeLoDy outperforms existing work in terms of both quality estimation (reducing estimation error by $17.6\% \sim 24.2\%$) and social performance (increasing requesters' utility by $18.2\% \sim 46.6\%$) in long-term scenarios.
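The LDS-based quality inference described above can be illustrated, in a simplified scalar form, by a Kalman filter: a worker's latent quality $q_t$ evolves as $q_t = a\,q_{t-1} + w_t$, and each run produces a noisy observation $y_t = q_t + v_t$. The filter fuses all historical observations into a running estimate, which is what lets such a mechanism track temporally varying quality instead of assuming it is static. This is a minimal sketch of the general technique, not the paper's actual algorithm; the parameter names (`a`, `sigma_w`, `sigma_v`) and their values are illustrative assumptions.

```python
def kalman_quality_filter(observations, a=0.95, sigma_w=0.05, sigma_v=0.2,
                          q0=0.5, p0=1.0):
    """Track a worker's latent quality across runs with a scalar Kalman filter.

    observations -- noisy per-run quality measurements y_t
    a            -- assumed dynamics coefficient (q_t = a * q_{t-1} + noise)
    sigma_w      -- assumed process-noise variance
    sigma_v      -- assumed observation-noise variance
    q0, p0       -- prior mean and variance of the initial quality
    """
    q, p = q0, p0
    estimates = []
    for y in observations:
        # Predict step: propagate the previous estimate through the dynamics.
        q_pred = a * q
        p_pred = a * a * p + sigma_w
        # Update step: correct the prediction with the new noisy observation.
        k = p_pred / (p_pred + sigma_v)   # Kalman gain
        q = q_pred + k * (y - q_pred)
        p = (1.0 - k) * p_pred
        estimates.append(q)
    return estimates
```

In a full LDS framework the parameters `a`, `sigma_w`, and `sigma_v` would themselves be learned from data (e.g., via EM) rather than fixed by hand; the filter above only shows the inference half of that loop.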
