Fairness and Transparency in Crowdsourcing

Despite the success of crowdsourcing, the question of ethics has not yet been addressed in its entirety. Existing efforts have studied fairness in worker compensation and in helping requesters detect malevolent workers. In this paper, we propose fairness axioms that generalize existing work and pave the way for studying fairness in task assignment, task completion, and worker compensation. Transparency, on the other hand, has so far been addressed through plug-ins and forums that track workers' performance and rate requesters. As with fairness, we define transparency axioms and advocate addressing transparency holistically through declarative specifications. We also discuss how fairness and transparency could be enforced and evaluated in a crowdsourcing platform.
