Measuring Search Engine Quality

The effectiveness of twenty public search engines is evaluated using TREC-inspired methods and a set of 54 queries taken from real Web search logs. The World Wide Web is taken as the test collection, so each engine is evaluated as a combination of crawler and text retrieval system. The engines are compared on a range of measures derivable from binary relevance judgments of the first seven live results returned for each query. Statistical testing reveals significant differences between engines and high intercorrelations between the measures. Surprisingly, given the dynamic nature of the Web and the time elapsed between the studies, there is also a high correlation between the results of this study and those of a previous study by Gordon and Pathak. For nearly all engines, precision declines gradually with increasing cutoff after some initial fluctuation. As a group, the engines perform worse than the participants in the TREC-8 Large Web task, although the best engines approach the median of those systems. Shortcomings of current Web search evaluation methodology are identified and recommendations are made for future improvements. In particular, the present study and its predecessors deal with queries that are assumed to derive from a need to find a selection of documents relevant to a topic. By contrast, real Web search reflects a range of other information need types, which require different judging procedures and different measures.
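As a rough illustration of the cutoff-based measures the abstract refers to, the sketch below computes precision at cutoff k from a list of binary relevance judgments for one ranked result list. It is a minimal Python example with hypothetical judgment data, not code from the study; in particular, dividing by k (so that an engine returning fewer than k results is penalised) is one reasonable convention and is not claimed to be the exact definition used in the paper.

```python
from typing import Sequence

def precision_at_k(judgments: Sequence[int], k: int) -> float:
    """Precision at cutoff k from binary relevance judgments.

    judgments holds 1 for a result judged relevant and 0 otherwise,
    in the order the engine returned the results.
    """
    if k <= 0:
        raise ValueError("cutoff k must be positive")
    # Divide by k, not by the number of results actually returned,
    # so an engine that returns fewer than k results is penalised.
    return sum(judgments[:k]) / k

# Hypothetical judgments of the first seven results for one query.
judged = [1, 0, 1, 1, 0, 0, 1]
for k in range(1, len(judged) + 1):
    print(f"P@{k} = {precision_at_k(judged, k):.2f}")
```

Averaging such per-query values over all 54 queries would give a per-engine score of the kind compared in the study, and correlations between measures could then be computed across the twenty engines.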

[1] Michael E. Lesk, et al. Computer Evaluation of Indexing and Text Processing, 1968, JACM.

[2] M. E. Maron, et al. An evaluation of retrieval effectiveness for a full-text document-retrieval system, 1985, CACM.

[3] Michael Eisenberg, et al. Order effects: A study of the possible influence of presentation order on user judgments of document relevance, 1988, J. Am. Soc. Inf. Sci.

[4] Donna K. Harman, et al. Overview of the Fifth Text REtrieval Conference (TREC-5), 1996, TREC.

[5] Gary Marchionini, et al. A Comparative Study of Web Search Service Performance, 1996.

[6] Cyril Cleverdon, et al. The Cranfield tests on index language devices, 1997.

[7] David Hawking, et al. Overview of TREC-7 Very Large Collection Track, 1997, TREC.

[8] Ellen M. Voorhees, et al. Variations in relevance judgments and the measurement of retrieval effectiveness, 1998, SIGIR '98.

[9] M. Koster. The web robots pages, 1999.

[10] Monika Henzinger, et al. Analysis of a very large web search engine query log, 1999, SIGIR Forum.

[11] Peter Bailey, et al. Overview of the TREC-8 Web Track, 2000, TREC.

[12] C. Lee Giles, et al. Accessibility of information on the web, 1999, Nature.

[13] Jaideep Srivastava, et al. First 20 precision among World Wide Web search services (search engines), 1999.

[14] Robert M. Losee, et al. Measuring search-engine quality and query difficulty: ranking with Target and Freestyle, 1999.

[15] Michael D. Gordon, et al. Finding Information on the World Wide Web: The Retrieval Effectiveness of Search Engines, 1999, Inf. Process. Manag.

[16] C. Lee Giles, et al. Accessibility of information on the Web, 2000, Intelligence.

[17] Donna K. Harman, et al. Scaling Up the TREC Collection, 1999, Information Retrieval.