The TREC-9 Interactive Track Report

The TREC Interactive Track has the goal of investigating interactive information retrieval by examining the process as well as the results. In TREC-9, six research groups ran a total of 12 interactive information retrieval (IR) system variants on a shared problem: a fact-finding task, eight questions, and newspaper/newswire documents from the TREC collections. This report summarizes the shared experimental framework, which for TREC-9 was designed to support analysis and comparison of system performance only within sites. The report refers the reader to separate discussions of the experiments performed by each participating group: their hypotheses, experimental systems, and results. The papers from each of the participating groups and the raw and evaluated results are available via the TREC home page (trec.nist.gov).
