Skill Ontology-Based Model for Quality Assurance in Crowdsourcing

Crowdsourcing continues to gain momentum as its potential becomes more widely recognized. Nevertheless, quality remains a valid concern, introducing uncertainty into the results obtained from the crowd. We identify the different aspects that dynamically affect the overall quality of a crowdsourcing task. Accordingly, we propose a skill ontology-based model that caters for these aspects, as a management technique to be adopted by crowdsourcing platforms. The model maintains a dynamically evolving ontology of skills, with libraries of standardized and personalized assessments for awarding skills to workers. Aligning a worker's set of skills with the set a task requires boosts the resulting quality. We visualize the model's components and workflow, and consider how to guard it against malicious or unqualified workers, whose responses introduce this uncertainty and degrade overall quality.
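The core matching step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the ontology is modeled as a simple is-a hierarchy, and the names (`PARENT`, `satisfies`, `qualifies`) and example skills are hypothetical.

```python
# Hypothetical skill ontology as an is-a hierarchy: each skill maps to its
# parent (more general) skill; roots map to nothing.
PARENT = {
    "arabic_translation": "translation",
    "translation": "language_work",
    "audio_transcription": "language_work",
}

def satisfies(worker_skill, required_skill):
    """A worker skill satisfies a requirement if it equals the requirement
    or is a descendant of it in the ontology (a more specific skill
    implies its generalizations)."""
    skill = worker_skill
    while skill is not None:
        if skill == required_skill:
            return True
        skill = PARENT.get(skill)
    return False

def qualifies(worker_skills, task_requirements):
    """A worker qualifies for a task if every required skill is covered
    by at least one of the worker's awarded skills."""
    return all(
        any(satisfies(w, r) for w in worker_skills)
        for r in task_requirements
    )
```

Under this sketch, a worker holding `arabic_translation` qualifies for a task requiring the more general `translation`, but not vice versa; routing tasks only to qualifying workers is what aligns skill sets with task requirements.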
