Understanding Search Engines: Requirements for Explaining Search Results

Three different groups use commercial Web Search Engines: the Developers, the Evaluators, and the End-Users. Each group has different information needs and applies different criteria when examining the retrieved documents. Most Search Engines attempt to measure retrieval performance by providing figures for recall and precision, which indicate the quantity of the information obtained but say little about its quality. In this paper we survey the requirements of each user group and propose a generic framework, independent of the details of the underlying Search Engine, whose aim is to provide users with explanation utilities that convey qualitative information about the returned documents. The motivation for this study emerged from real-life experience acquired during the development of a Web Search Engine for Greek; our purpose is to explain the most common difficulties users face in understanding how Search Engines operate.
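
As a concrete illustration of the recall and precision figures mentioned above, the following is a minimal Python sketch (our own, not taken from the paper or from any particular Search Engine) of how the two measures are typically computed for a single query; the document identifiers and counts are hypothetical.

    def precision_recall(retrieved, relevant):
        """Compute precision and recall for a single query.

        retrieved: set of document IDs returned by the Search Engine
        relevant:  set of document IDs judged relevant by an assessor
        """
        hits = retrieved & relevant  # relevant documents that were actually retrieved
        precision = len(hits) / len(retrieved) if retrieved else 0.0
        recall = len(hits) / len(relevant) if relevant else 0.0
        return precision, recall

    # Hypothetical example: 10 documents retrieved, 4 of them among the
    # 8 documents judged relevant in the whole collection.
    retrieved = {f"doc{i}" for i in range(10)}
    relevant = {"doc0", "doc1", "doc2", "doc3", "doc42", "doc43", "doc44", "doc45"}
    p, r = precision_recall(retrieved, relevant)
    print(f"precision = {p:.2f}, recall = {r:.2f}")  # precision = 0.40, recall = 0.50

Both figures are purely quantitative: they count how many relevant documents were retrieved, but reveal nothing about qualitative criteria such as why a given document was retrieved or ranked as it was, which is exactly the gap the proposed explanation framework is meant to fill.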
