Beyond Independent Agreement: A Tournament Selection Approach for Quality Assurance of Human Computation Tasks