Crowdsourcing Document Relevance Assessment with Mechanical Turk