City submitted two runs each for the automatic ad hoc, very large collection, automatic routing and Chinese tracks, and also took part in the interactive and filtering tracks.

For the automatic ad hoc runs, the method used was query expansion with terms taken from the top documents retrieved by a pilot search on the topic terms. Additional runs suggest that we would have done better without expansion. Two runs using the method of city96al were also submitted for the Very Large Collection track.

For routing, the training database and its relevant documents were partitioned into three parts. Working on a pool of terms extracted from the relevant documents for one partition, an iterative procedure added or removed terms and/or varied their weights. After each change in query content or term weights, a score was calculated by using the current query to search a second portion of the training database and evaluating the results against the corresponding set of relevant documents. Methods were compared by evaluating queries predictively against the third training partition. Queries from different methods were then merged and the results evaluated in the same way.

For the Chinese track, two runs were submitted, one based on character searching and the other on words or phrases. Much of the work involved investigating plausible methods of applying Okapi-style weighting to phrases.
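The ad hoc expansion described above is a form of blind (pseudo-relevance) feedback: the top documents from the pilot search are treated as if they were relevant, and terms occurring in them are weighted and added to the query. The Python sketch below illustrates the idea using the Robertson/Sparck Jones relevance weight; the selection value (term weight times its pseudo-relevant document frequency) and the cutoffs top_k and n_expand are illustrative assumptions, not details of the submitted runs.

```python
import math
from collections import Counter

def rsj_weight(n_t, r_t, N, R):
    # Robertson/Sparck Jones relevance weight with the usual 0.5 corrections:
    # w = log( (r+0.5)(N-n-R+r+0.5) / ((n-r+0.5)(R-r+0.5)) )
    return math.log(((r_t + 0.5) * (N - n_t - R + r_t + 0.5))
                    / ((n_t - r_t + 0.5) * (R - r_t + 0.5)))

def expand_query(topic_terms, pilot_ranking, doc_terms, df, N,
                 top_k=20, n_expand=30):
    """Blind feedback: treat the top_k pilot-search documents as relevant
    and return the n_expand best new terms with their weights.

    pilot_ranking -- doc ids ranked by a pilot search on topic_terms
    doc_terms     -- doc id -> terms occurring in that document
    df            -- term -> number of documents containing it
    N             -- number of documents in the collection
    """
    pseudo_rel = pilot_ranking[:top_k]
    R = len(pseudo_rel)
    r = Counter()                        # r_t: pseudo-relevant docs containing t
    for d in pseudo_rel:
        r.update(set(doc_terms[d]))
    candidates = []
    for t, r_t in r.items():
        if t in topic_terms:
            continue                     # keep the original topic terms as they are
        w = rsj_weight(df[t], r_t, N, R)
        candidates.append((r_t * w, t, w))   # assumed selection value: r_t * w
    candidates.sort(reverse=True)
    return [(t, w) for _, t, w in candidates[:n_expand]]
```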
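The iterative routing procedure amounts to greedy hill-climbing over query content and term weights, tuned on one training partition and then judged predictively on another. The sketch below is a plausible reconstruction, not the actual procedure: the move set (add a pooled term, drop a term, rescale a weight) and the improve-only acceptance rule are assumptions, and search and evaluate stand in for the retrieval engine run against the tuning partition and the effectiveness measure computed from its relevance judgements.

```python
def candidate_moves(query, term_pool, factors=(0.5, 2.0)):
    # Assumed move set: add a pooled term at unit weight, remove a term,
    # or rescale an existing term's weight.
    for t in term_pool:
        if t not in query:
            yield ("add", t, 1.0)
    for t in query:
        yield ("remove", t, None)
        for f in factors:
            yield ("scale", t, f)

def apply_move(query, move):
    kind, t, x = move
    q = dict(query)
    if kind == "add":
        q[t] = x
    elif kind == "remove":
        q.pop(t)
    else:                                # "scale"
        q[t] = q[t] * x
    return q

def optimize_query(initial_query, term_pool, search, evaluate):
    """Accept any change that improves the score obtained by searching
    the tuning partition; stop when no move helps.

    search   -- query (term -> weight) -> ranked doc ids
    evaluate -- ranked doc ids -> scalar effectiveness score
    """
    query = dict(initial_query)
    best = evaluate(search(query))
    improved = True
    while improved:
        improved = False
        for move in candidate_moves(query, term_pool):
            trial = apply_move(query, move)
            score = evaluate(search(trial))
            if score > best:
                query, best, improved = trial, score, True
                break                    # regenerate moves from the new query
    return query
```

Queries built this way by different methods would then be run, unchanged, against the third partition, so that the comparison between methods is predictive rather than fitted.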
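For the Chinese runs the main variable is the indexing unit (single characters versus words or phrases), with Okapi-style weighting applied to whichever unit is chosen. The sketch below shows the standard Okapi BM25 weight applied to generic indexing units; the parameter values k1 = 1.2 and b = 0.75 are common defaults, not values taken from the submitted runs.

```python
import math

def bm25_weight(tf, df, dl, avdl, N, k1=1.2, b=0.75):
    # Okapi BM25 weight for one indexing unit (word, phrase or character).
    # tf: within-document frequency, df: document frequency, dl: document
    # length, avdl: average document length, N: collection size.
    idf = math.log((N - df + 0.5) / (df + 0.5))
    K = k1 * ((1 - b) + b * dl / avdl)
    return idf * tf * (k1 + 1) / (tf + K)

def score(query_units, doc_tf, df, dl, avdl, N):
    # Sum the weights over the query's indexing units present in the document.
    return sum(bm25_weight(doc_tf[u], df[u], dl, avdl, N)
               for u in query_units if u in doc_tf and u in df)
```

For the character-based run the query units are trivial to obtain (in Python, list(text) yields one unit per character); the word/phrase run needs a segmenter, and deciding how the weighting should treat a multi-character phrase as against its constituent characters is where most of the investigation lay.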