A Weighted Aggregation Rule in Crowdsourcing Systems for High Result Accuracy
Ge Yu | Derong Shen | Dejun Yue | Xiaocong Yu
[1] Aditya G. Parameswaran,et al. Evaluating the crowd with confidence , 2013, KDD.
[2] Qiang Yang,et al. Cross-task crowdsourcing , 2013, KDD.
[3] Lei Chen,et al. Whom to Ask? Jury Selection for Decision Making Tasks on Micro-blog Services , 2012, Proc. VLDB Endow..
[4] Yuandong Tian,et al. Learning from crowds in the presence of schools of thought , 2012, KDD.
[5] Gang Chen,et al. An online cost sensitive decision-making method in crowdsourcing systems , 2013, SIGMOD '13.
[6] Shipeng Yu,et al. Eliminating Spammers and Ranking Annotators for Crowdsourced Labeling Tasks , 2012, J. Mach. Learn. Res..
[7] Stefanie Nowak,et al. How reliable are annotations via crowdsourcing: a study about inter-annotator agreement for multi-label image annotation , 2010, MIR '10.
[8] Brendan T. O'Connor,et al. Cheap and Fast – But is it Good? Evaluating Non-Expert Annotations for Natural Language Tasks , 2008, EMNLP.
[9] Tim Kraska,et al. CrowdER: Crowdsourcing Entity Resolution , 2012, Proc. VLDB Endow..
[10] Shmuel Nitzan,et al. Optimal Decision Rules in Uncertain Dichotomous Choice Situations , 1982 .
[11] Cyrus Rashtchian,et al. Collecting Image Annotations Using Amazon’s Mechanical Turk , 2010, Mturk@HLT-NAACL.