You're Hired! An Examination of Crowdsourcing Incentive Models in Human Resource Tasks