Overview of the Third NTCIR Workshop

This paper introduces the third NTCIR Workshop, the latest in a series of evaluation workshops designed to advance research in information access technologies, including information retrieval, automatic text summarization, and question answering, by providing large-scale test collections and a forum for researchers. In the third workshop, the document collections were diversified in length, genre, and language. The focus of evaluation was likewise broadened, from document-level retrieval to the processing of sub-document units and to technologies that help users exploit the information contained in documents. The purpose of this paper is to serve as an introduction to the research described in detail in the rest of this volume.
