Impact of Task Recommendation Systems in Crowdsourcing Platforms

Commercial crowdsourcing platforms accumulate hundreds of thousands of tasks with a wide range of rewards, durations, and skill requirements. This makes it difficult for workers to find tasks that match their preferences and skill sets. As a consequence, recommendation systems for matching tasks and workers are gaining importance. In this work, we examine how these recommendation systems may influence fairness aspects for workers, such as success rate and earnings. To draw generalizable conclusions, we use a simple simulation model that allows us to consider different types of crowdsourcing platforms, workers, and tasks in the evaluation. We show that even simple recommendation systems lead to improvements for most platform users. However, our results also indicate, and shall raise awareness, that a small fraction of users is negatively affected by those systems.

ACM Reference format: Kathrin Borchert, Matthias Hirth, Steffen Schnitzer, and Christoph Rensing. 2017. Impact of Task Recommendation Systems in Crowdsourcing Platforms. In Proceedings of Workshop on Responsible Recommendation at RecSys 2017, Como, Italy, August 2017 (FATREC'17), 6 pages. https://doi.org/10.18122/B2CX1Q
