A task assignment strategy for crowdsourcing-based web accessibility evaluation system

Web accessibility evaluation aims to find the interaction barriers that people with disabilities face when accessing content on the Web. Because some checkpoints require human inspection for conformance evaluation, evaluating a website is usually expensive. To address this issue, crowdsourcing-based systems are used in web accessibility evaluation to elicit contributions from volunteer participants. However, some accessibility evaluation tasks are complicated and require a certain level of evaluation expertise. This makes task assignment in crowdsourcing a challenging problem: when complicated tasks are assigned to inexperienced participants, evaluation accuracy suffers. To address this issue, we propose in this paper a novel task assignment strategy called Evaluator-Decision-Based Assignment (EDBA) to better leverage the participation and expertise of volunteers. Using evaluators' historical evaluation records and experts' reviews, we train a minimum-cost model via machine learning methods to obtain an optimal task assignment map. Experiments on the Chinese Web Accessibility Evaluation System show that our method achieves high accuracy in website accessibility evaluation. Meanwhile, the balanced assignments from EDBA also enable both novices and experienced evaluators to participate effectively in accessibility evaluation.
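The abstract does not detail how the learned cost model and the assignment map interact, so the following is only a minimal sketch under assumptions: it assumes the trained model outputs a per-evaluator, per-task cost (e.g. expected error probability inferred from historical records and expert reviews), and that the assignment map is then obtained by solving a classical minimum-cost assignment problem (Hungarian method). The function name `assign_tasks` and the example cost values are hypothetical, not from the paper.

```python
# Minimal sketch (not the authors' implementation) of turning a learned
# evaluator-task cost matrix into a minimum-cost task assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment


def assign_tasks(cost_matrix: np.ndarray) -> list[tuple[int, int]]:
    """Return (evaluator, task) pairs minimizing the total expected error cost.

    cost_matrix[i, j] is a hypothetical predicted cost (e.g. expected error
    probability) of evaluator i judging task j, produced by the trained model.
    """
    # linear_sum_assignment solves the assignment problem (Hungarian method).
    evaluators, tasks = linear_sum_assignment(cost_matrix)
    return list(zip(evaluators.tolist(), tasks.tolist()))


# Example with made-up costs: 3 evaluators, 3 accessibility checkpoints.
costs = np.array([
    [0.10, 0.40, 0.55],   # experienced evaluator: low cost on the hard task
    [0.35, 0.20, 0.60],
    [0.50, 0.45, 0.25],   # novice: cheapest on the simplest task
])
print(assign_tasks(costs))  # e.g. [(0, 0), (1, 1), (2, 2)]
```

In practice such an assignment would be recomputed in batches as new tasks arrive and as evaluators' historical records update the cost predictions, which is consistent with (though not stated by) the abstract's goal of balancing participation between novices and experienced evaluators.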
