Overview of WebCLEF 2008

We describe the WebCLEF 2008 task. Like the 2007 edition, WebCLEF 2008 implements a multilingual "information synthesis" task in which, for a given topic, participating systems must extract important snippets from web pages. We detail the task, the assessment procedure, the evaluation measures, and the results.
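
To make the snippet-extraction task concrete, here is a minimal sketch of one plausible baseline: rank candidate snippets (here, sentences) by term overlap with the topic statement and return the top k. This is purely illustrative; the sentence segmentation, the scoring scheme, and all function names are assumptions, not the method of any participating system or of the official evaluation.

    import re
    from collections import Counter

    def tokenize(text: str) -> list[str]:
        # Lowercase and split on runs of non-letter characters.
        return [t for t in re.split(r"[^a-z]+", text.lower()) if t]

    def score_snippet(snippet: str, topic_terms: Counter) -> float:
        # Fraction of snippet tokens that occur in the topic statement;
        # length normalization keeps long snippets from dominating.
        tokens = tokenize(snippet)
        if not tokens:
            return 0.0
        return sum(1 for t in tokens if t in topic_terms) / len(tokens)

    def extract_snippets(topic: str, page_text: str, k: int = 5) -> list[str]:
        # Return the k sentences of page_text most similar to the topic.
        topic_terms = Counter(tokenize(topic))
        # Naive sentence segmentation; real systems would do better.
        sentences = re.split(r"(?<=[.!?])\s+", page_text)
        return sorted(sentences,
                      key=lambda s: score_snippet(s, topic_terms),
                      reverse=True)[:k]

A multilingual version of this baseline would, at minimum, swap in language-aware tokenization and translate or expand the topic terms across languages, which is where much of the difficulty of the task lies.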
