Overview of the INEX 2011 Question Answering Track (QA@INEX)

The INEX QA track aimed to evaluate complex question-answering tasks in which answers are short texts generated from Wikipedia by extracting relevant short passages and aggregating them into a coherent summary. Such a task combines question answering, XML/passage retrieval, and automatic summarization in order to get closer to real information needs. Building on the groundwork carried out in the 2009-2010 editions to define the sub-tasks and a novel evaluation methodology, the 2011 edition experimented with contextualizing tweets using a recent cleaned dump of Wikipedia. Participants had to contextualize 132 tweets from the New York Times (NYT). Both the informativeness and the readability of the answers were evaluated. 13 teams from 6 countries actively participated in this track. The tweet contextualization task will continue in 2012 as part of the CLEF INEX lab, with the same methodology and baseline but on a much wider range of tweet types.