CrowdCog

While crowd workers typically complete a variety of tasks on crowdsourcing platforms, there is no widely accepted method for matching workers to the types of tasks they perform best. Researchers have considered using worker demographics, behavioural traces, and prior task completion records to optimise task assignment, yet optimal task assignment remains a challenging research problem because of the limitations of these approaches, which in turn can significantly affect the future of crowdsourcing. We present 'CrowdCog', an online dynamic system that performs both task assignment and task recommendation by relying on fast-paced online cognitive tests to estimate worker performance across a variety of tasks. Our work extends prior findings that workers' cognitive ability affects their crowdsourcing task performance. Our study, deployed on Amazon Mechanical Turk, involved 574 workers and 983 HITs spanning four typical crowd task types (Classification, Counting, Transcription, and Sentiment Analysis). Our results show that both our assignment and recommendation methods yield a significant performance increase (5% to 20%) compared to a generic or random task assignment. Our findings pave the way for the use of quick cognitive tests to provide robust recommendations and assignments to crowd workers.
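To make the idea concrete, below is a minimal sketch of cognitive-test-based task assignment. It assumes a per-task-type linear model that maps normalised cognitive test scores to a predicted performance score; the test names, weights, and example scores are illustrative placeholders, not values or models from the CrowdCog study itself.

```python
# Illustrative sketch: assign a crowd worker to a task type based on
# short cognitive test scores. Weights below are hypothetical, chosen
# only to demonstrate the mechanism.

COGNITIVE_TESTS = ["stroop", "flanker", "n_back", "simon"]

# Hypothetical per-task-type weights over normalised test scores in [0, 1].
TASK_MODELS = {
    "classification":     {"stroop": 0.4, "flanker": 0.3, "n_back": 0.2, "simon": 0.1},
    "counting":           {"stroop": 0.1, "flanker": 0.4, "n_back": 0.3, "simon": 0.2},
    "transcription":      {"stroop": 0.2, "flanker": 0.1, "n_back": 0.5, "simon": 0.2},
    "sentiment_analysis": {"stroop": 0.3, "flanker": 0.2, "n_back": 0.2, "simon": 0.3},
}


def predict_performance(scores: dict[str, float]) -> dict[str, float]:
    """Predict a worker's performance on each task type as a weighted
    sum of their normalised cognitive test scores."""
    return {
        task: sum(weights[t] * scores[t] for t in COGNITIVE_TESTS)
        for task, weights in TASK_MODELS.items()
    }


def assign_task(scores: dict[str, float]) -> str:
    """Assign the worker to the task type with the highest predicted
    performance."""
    predictions = predict_performance(scores)
    return max(predictions, key=predictions.get)


if __name__ == "__main__":
    # Example: a worker with a strong working-memory (n_back) score.
    worker_scores = {"stroop": 0.6, "flanker": 0.5, "n_back": 0.9, "simon": 0.4}
    print(assign_task(worker_scores))  # -> "transcription" under these toy weights
```

Under this scheme, a recommendation variant would rank the task types by predicted performance and surface the top few to the worker, rather than returning only the single best assignment.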
