In TREC-8 the emphasis was on each group's exploration of different approaches to supporting the common searcher task and understanding the reasons for the results they get. No formal coordination of hypotheses or comparison of systems across sites was planned, but groups were encouraged to seek out and exploit synergies. Some groups designed/tailored their systems to optimize performance on the task; others simply used the task to exercise their system(s). Groups from the following institutions took part: New Mexico State University at Las Cruces, Oregon Health Sciences University, Royal Melbourne Institute of Technology/CSIRO, Rutgers University, Sheffield University, the University of California at Berkeley, and the University of North Carolina at Chapel Hill. A total of 936 searches were performed as part of the experiments.