The goal of the Text Retrieval Conference (TREC) is to provide a setting for large-scale testing of text retrieval technology (Voorhees & Harman, 2000). TREC is organized as a workshop series that is based on realistic test collections, uniform and appropriate evaluation procedures, and a forum for the exchange of research ideas and discussion of research methodology (see trec.nist.gov). Most of the research carried out within TREC has involved testing information retrieval (IR) systems in a fully automatic setting, but work on IR systems collaborating with human searchers, that is, interactive searching, has been part of TREC in various forms from the beginning.

This special issue brings together examples of recent interactive studies, often multi-year sequences, carried out as part of TREC and/or separately using the TREC test collections. Most of the experiments were carried out as part of the TREC Interactive Track, for which an annotated bibliography is included (Over, 2001). The papers reflect an interest in the process of interactive searching as well as its results: the observation, measurement, and evaluation of a human searcher interacting with a search system and data, as seen from multiple perspectives simultaneously.

All of the papers emanate to some degree from the instance recall task that was used as a common task by the Interactive Track from TREC-6 through TREC-8. In this task the user is given a description of some needed information (a topic) and the goal is to find as many distinct instances (called aspects in the original TREC-6 papers) of the information described by the topic as possible in the allotted time. In essence, the topic poses a question to which there are multiple answers, and the user's job is to find as many different answers as possible. Examples of needed information include discoveries of the Hubble telescope and names of countries importing Cuban sugar.

The relative stability of the instance recall framework provided the opportunity to investigate a set of related problems and solutions, with each year's experiment and system building on the previous year's results. Some groups tried to adapt their systems to the specific task; most did not. Instance retrieval presented special problems to old and new approaches alike, since it called for a search for answers to a question that had multiple unique answers, independent of how those answers were distributed within and across documents. Once an answer was found, finding, displaying, or saving duplicates was wasted effort, since overall search time was limited and duplicate answers did not affect the effectiveness score for the search.

Although the various participating groups performed their research using a common task, they asked a wide diversity of research questions and used markedly different retrieval systems to answer them. Two groups looked at clustering to provide the searcher with more information than standard ordered lists of documents. Allan, Leuski, Swan, and Byrd (2001) looked at how ideas from document clustering could be used to improve the retrieval accuracy of ranked lists by combining them with visualizations of inter-document similarity.
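To make the scoring behavior described above concrete, the following minimal Python sketch (hypothetical names, not the official TREC evaluation code) assumes that instance recall is simply the fraction of known distinct instances covered by the documents a searcher saves; under that assumption, saving a duplicate answer leaves the score unchanged.

```python
# Minimal sketch of an instance-recall style score (hypothetical, for illustration).
# Assumes judgments map each relevant document to the set of distinct
# instances (answers) it contains for a given topic.

def instance_recall(saved_docs, doc_to_instances, total_instances):
    """Fraction of known distinct instances covered by the saved documents."""
    covered = set()
    for doc in saved_docs:
        covered |= doc_to_instances.get(doc, set())
    return len(covered) / total_instances if total_instances else 0.0

# Toy example: three known instances for a topic.
judgments = {
    "doc1": {"instance_A"},
    "doc2": {"instance_A"},          # duplicate answer
    "doc3": {"instance_B", "instance_C"},
}

print(instance_recall(["doc1", "doc3"], judgments, 3))          # 1.0
print(instance_recall(["doc1", "doc2", "doc3"], judgments, 3))  # still 1.0: doc2 adds nothing
```

Under this assumption, time spent locating or saving a second document for an already-covered instance yields no gain in the score, which is why duplicate detection mattered to the participating groups.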
[1] Nipon Charoenkitkarn, et al. The impact of text browsing on text retrieval performance. Inf. Process. Manag., 2001.
[2] Paul Over, et al. The TREC interactive track: an annotated bibliography. Inf. Process. Manag., 2001.
[3] Andrew Turpin, et al. Challenging conventional assumptions of automated information retrieval with real users: Boolean searching and batch retrieval evaluations. Inf. Process. Manag., 2001.
[4] Donna K. Harman, et al. Overview of the Sixth Text REtrieval Conference (TREC-6). Inf. Process. Manag., 1997.
[5] Ross Wilkinson, et al. Using clustering and classification approaches in interactive retrieval. Inf. Process. Manag., 2001.
[6] James Allan, et al. Evaluating combinations of ranked lists and visualizations of inter-document similarity. Inf. Process. Manag., 2001.
[7] Nicholas J. Belkin, et al. Iterative exploration, design and evaluation of support for query reformulation in interactive information retrieval. Inf. Process. Manag., 2001.
[8] Ray R. Larson, et al. TREC interactive with Cheshire II. Inf. Process. Manag., 2001.
[9] Kelly Maglaughlin, et al. Passage feedback with IRIS. Inf. Process. Manag., 2001.