Crowdsourcing for information retrieval

The 2nd SIGIR Workshop on Crowdsourcing for Information Retrieval (CIR 2011) was held on July 28, 2011 in Beijing, China, in conjunction with the 34th Annual ACM SIGIR Conference. The workshop brought together researchers and practitioners to disseminate recent advances in theory, empirical methods, and novel applications of crowdsourcing for information retrieval (IR). The workshop program included three invited talks, a panel discussion entitled Beyond the Lab: State-of-the-Art and Open Challenges in Practical Crowdsourcing, and presentations of nine refereed research papers and one demonstration paper. A Best Paper Award, sponsored by Microsoft Bing, was presented to Jun Wang and Bei Yu for their paper Labeling Images with Queries: A Recall-based Image Retrieval Game Approach. A Crowdsourcing Challenge contest, sponsored by CrowdFlower, was also announced prior to the workshop. The contest offered the winner both seed funding and advanced technical support to use CrowdFlower's services for innovative work. The workshop organizers selected Mark Smucker as the winner based on his proposal entitled The Crowd vs. the Lab: A Comparison of Crowd-Sourced and University Laboratory Participant Behavior. Proceedings of the workshop are available online [15].

[1] Jonathan L. Elsas et al. Ancestry.com Online Forum Test Collection, 2011.

[2] Matthew Lease et al. Look before you leap: Legal pitfalls of crowdsourcing, 2011, ASIST.

[3] Laura A. Dabbish et al. Labeling images with a computer game, 2004, AAAI Spring Symposium: Knowledge Collection from Volunteer Contributors.

[4] Vikas Kumar et al. CrowdSearch: exploiting crowds for accurate real-time image search on mobile phones, 2010, MobiSys '10.

[5] Matthew Lease et al. Crowdsourcing for information retrieval: principles, methods, and applications, 2011, SIGIR.

[6] Jeroen B. P. Vuurens et al. How Much Spam Can You Take? An Analysis of Crowdsourcing Results to Increase Accuracy, 2011.

[7] Matthew Lease et al. Semi-Supervised Consensus Labeling for Crowdsourcing, 2011.

[8] Cyril Cleverdon et al. The Cranfield tests on index language devices, 1997.

[9] David Vallet. Crowdsourced Evaluation of Personalization and Diversification Techniques in Web Search, 2011.

[10] Padmini Srinivasan et al. GEAnn - Games for Engaging Annotations, 2011.

[11] Matthew Lease et al. Crowdsourcing 101: putting the WSDM of crowds to work for you, 2011, WSDM '11.

[12] Matthew Lease et al. Crowdsourcing for search evaluation, 2011, SIGIR Forum.

[13] Matthew Lease et al. Crowdsourcing for search and data mining, 2011, WSDM '11.

[14] Bei Yu et al. Labeling Images with Queries: A Recall-based Image Retrieval Game Approach, 2011.

[15] James Allan et al. Minimal test collections for retrieval evaluation, 2006, SIGIR.

[16] Emine Yilmaz et al. A statistical method for system evaluation using incomplete judgments, 2006, SIGIR.

[17] Tie-Yan Liu et al. Learning to rank for information retrieval, 2009, SIGIR.

[18] Mark D. Smucker et al. The Crowd vs. the Lab: A Comparison of Crowd-Sourced and University Laboratory Participant Behavior, 2011.

[19] Patrick Schone et al. Genealogical Search Analysis Using Crowd Sourcing, 2012.

[20] Zhang Chuang et al. Quality Control of Crowdsourcing through Workers Experience, 2011.

[21] Emine Yilmaz et al. A simple and efficient sampling method for estimating AP and NDCG, 2008, SIGIR '08.

[22] Peter Norvig et al. The Unreasonable Effectiveness of Data, 2009, IEEE Intelligent Systems.
