Meta-search, or the combination of the outputs of different search engines in response to a query, has been shown to improve performance. Since the scores produced by different search engines are not comparable, researchers have often decomposed the meta-search problem into a score normalization step followed by a combination step. Combination has been studied by many researchers. While appropriate normalization can affect performance, most of the normalization schemes suggested are ad hoc in nature.
In this paper, we propose a formal approach to normalizing scores for meta-search that takes the distributions of the scores into account. Recently, it has been shown that the score distributions a search engine produces for a given query may be modeled using an exponential distribution for the set of non-relevant documents and a normal distribution for the set of relevant documents. Here, it is shown that by equalizing the distributions of scores of the top non-relevant documents, the best meta-search performance reported in the literature is obtained. Since relevance information is not available a priori, we discuss two different ways of obtaining a good approximation to the distribution of scores of non-relevant documents. The first uses the distribution of the scores of all documents. The second fits a mixture of an exponential and a Gaussian to the scores of all documents and uses the resulting exponential component as an estimate of the non-relevant distribution. We show with experiments on TREC-3, TREC-4 and TREC-9 data that the best combination results are obtained by averaging the parameters obtained from these two approximations. These techniques work on a variety of search engines, including vector-space engines like SMART and probabilistic engines like INQUERY.
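The second approximation described above can be sketched as follows: fit a two-component mixture (exponential for non-relevant scores, Gaussian for relevant scores) to an engine's score list via EM, then map scores through the fitted exponential CDF to put different engines on a common scale. This is a minimal illustrative sketch, not the paper's implementation; the function names, the EM initialization, and the iteration count are assumptions.

```python
import numpy as np

def fit_exp_gauss_mixture(scores, n_iter=200):
    """EM fit of a two-component mixture over retrieval scores:
    an exponential (assumed non-relevant) plus a Gaussian (assumed relevant).
    Returns (lam, mu, sigma, pi), where pi is the exponential's weight."""
    s = np.asarray(scores, dtype=float)
    # Heuristic initialization (an assumption, not from the paper):
    # exponential covers the bulk of low scores, Gaussian the high tail.
    lam = 1.0 / s.mean()
    mu, sigma = np.percentile(s, 90), s.std()
    pi = 0.8
    for _ in range(n_iter):
        # E-step: responsibility of the exponential component for each score.
        p_exp = pi * lam * np.exp(-lam * s)
        p_gau = ((1 - pi) / (sigma * np.sqrt(2 * np.pi))
                 * np.exp(-0.5 * ((s - mu) / sigma) ** 2))
        r = p_exp / (p_exp + p_gau + 1e-300)
        # M-step: weighted maximum-likelihood parameter updates.
        pi = r.mean()
        lam = r.sum() / (r * s).sum()
        w = 1.0 - r
        mu = (w * s).sum() / w.sum()
        sigma = np.sqrt((w * (s - mu) ** 2).sum() / w.sum()) + 1e-9
    return lam, mu, sigma, pi

def normalize_scores(scores, lam):
    """Map raw scores through the fitted non-relevant (exponential) CDF,
    yielding comparable values in [0, 1) across engines."""
    return 1.0 - np.exp(-lam * np.asarray(scores, dtype=float))
```

With one such fit per engine, the normalized scores of the different engines can then be combined by any standard rule (e.g. averaging).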
The problem of normalization is important in many other areas including information filtering, topic detection and tracking, multilingual search and distributed retrieval. Thus, the techniques proposed here are likely to be applicable to many of these tasks.