Introduction

Information retrieval (IR) is a discipline concerned with the processes by which queries presented to information systems are matched against a "store" of texts (the term text may stand in for still images, sounds, video clips, paintings, or any other artifact of intellectual activity). The end result of the matching process is a list of texts that form a subset of the total store. Any number of means may accomplish the matching, but essentially, when specified attributes in a query are found to correspond with specified attributes of a text, the text is included in the list.

Since the middle of the 20th century, most efforts to improve information retrieval have focused on methods of matching text representations with query representations. Recently, however, researchers have undertaken the task of understanding the human, or user, role in IR. The basic assumption behind these efforts is that we cannot design effective IR systems without some knowledge of how users interact with them. This line of research, which studies users in the process of directly consulting an IR system, is called interactive information retrieval (IIR). To place IIR in context, I will give a brief background on traditional IR studies, follow this with a description of current models of IIR, and conclude with a discussion of new directions in IIR.

Background: The System Approach

The system approach to IR grew out of concerns with the "library problem" (e.g., Maron & Kuhns, 1960, p. 217): the problem of searching for and retrieving relevant documents from IR systems. The hardware and software problems associated with document retrieval and document representation still persist. The development of digitally based IR systems requires computer programs that match requests against stores of documents and then produce output.
In sophisticated systems of this sort, both input terms and output texts may be ranked according to preset criteria. The challenge to researchers in this area is to develop algorithms that optimize such rankings. There are, however, difficulties with the system orientation to IR.

The first problem with the system view lies in how IR systems are evaluated. In the system approach to information retrieval, system effectiveness is calculated by two measures: recall and precision. For any given search on a given database, recall is the ratio of the number of relevant documents retrieved to the number of relevant documents in the database. Precision is the ratio of the number of relevant documents retrieved to the total number of documents retrieved. These measurements rest on the assumptions that: (a) all documents in the system are known; (b) all documents in the system can be judged in advance for their usefulness (relevance) for any given problem; and (c) users' relevance judgments are a single event based solely on a text's content. Assumption (a) is valid only in the case of small test collections. Assumptions (b) and (c) are based on static notions of relevance. A user's judgment of the usefulness of a document may vary with his or her information-seeking stage (Kuhlthau, 1991), with criteria other than the topic of the document, such as availability of the text (Barry, 1994), or with his or her ability to express the information need to an intermediary or to an IR system (Belkin, 1980; Taylor, 1968).

The second difficulty with the system approach is that language is treated as if it were precise. Although natural language processing systems have made tremendous strides in the past decade (Turtle, 1994), language will remain a problem for system designers, because language is best understood by how it is used, rather than by what is said (Blair, 1990).
In other words, it may be possible to understand more about what a user says to an intermediary if his or her motives or goals are understood. …
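The recall and precision measures defined above can be sketched in a few lines of code. This is an illustrative sketch, not from the original text: the function name and the document-ID sets are assumptions chosen for the example.

```python
def recall_precision(retrieved, relevant):
    """Compute recall and precision for a single search.

    retrieved: set of document IDs returned by the system (hypothetical)
    relevant:  set of document IDs judged relevant in the database (hypothetical)
    """
    hits = retrieved & relevant                  # relevant documents that were retrieved
    recall = len(hits) / len(relevant)           # retrieved-relevant / all relevant in database
    precision = len(hits) / len(retrieved)       # retrieved-relevant / all retrieved
    return recall, precision

# Example: 10 relevant documents exist; the system returns 8, of which 4 are relevant.
r, p = recall_precision(
    retrieved={1, 2, 3, 4, 5, 6, 7, 8},
    relevant={1, 2, 3, 4, 11, 12, 13, 14, 15, 16},
)
# recall = 4/10 = 0.4, precision = 4/8 = 0.5
```

Note that assumption (a) in the text shows up directly here: computing recall requires the full `relevant` set, which is knowable only for small test collections.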
References

[1] Hong Xie, et al. Planned and Situated Aspects in Interactive IR: Patterns of User Interactive Intentions and Information Seeking Strategies, 1997.
[2] Robert M. Losee, et al. Feedback in Information Retrieval, 1996.
[3] Amanda Spink, et al. Modeling Users' Successive Searches in Digital Environments: A National Science Foundation/British Library Funded Study, 1998, D-Lib Mag.
[4] Carol L. Barry. User-Defined Relevance Criteria: An Exploratory Study, 1994, J. Am. Soc. Inf. Sci.
[5] Amanda Spink, et al. Study of Interactive Feedback During Mediated Information Retrieval, 1997, J. Am. Soc. Inf. Sci.
[6] D. C. Blair, et al. Language and Representation in Information Retrieval, 1990.
[7] M. E. Maron, et al. On Relevance, Probabilistic Indexing and Information Retrieval, 1960, JACM.
[8] Paul B. Kantor, et al. A study of information seeking and retrieving. III. Searchers, searches, and overlap, 1988, J. Am. Soc. Inf. Sci.
[9] David Robins. Dynamics and Dimensions of User Information Problems as Foci of Interaction in Information Retrieval, 1998.
[10] Paul B. Kantor, et al. A study of information seeking and retrieving. I. Background and methodology, 1988.
[11] Carol Collier Kuhlthau, et al. Inside the search process: Information seeking from the user's perspective, 1991, J. Am. Soc. Inf. Sci.
[12] Raya Fidel, et al. Moves in online searching, 1985.
[13] Tefko Saracevic, et al. The Stratified Model of Information Retrieval Interaction: Extension and Applications, 1997.
[14] Amanda Spink, et al. Partial Relevance Judgments During Interactive Information Retrieval: An Exploratory Study, 1997.
[15] Mei-Mei Wu. Information interaction dialogue: a study of patron elicitation in the information retrieval interaction, 1993.
[16] B. Dervin, et al. Information needs and uses, 1986.
[17] Howard R. Turtle. Natural language vs. Boolean query evaluation: a comparison of retrieval performance, 1994, SIGIR '94.
[18] Amanda Spink, et al. Elicitation Behavior During Mediated Information Retrieval, 1998, Inf. Process. Manag.
[19] Nicholas J. Belkin, et al. Knowledge Elicitation Using Discourse Analysis, 1987, Int. J. Man Mach. Stud.
[20] Gerard Salton, et al. A new comparison between conventional indexing (MEDLARS) and automatic text processing (SMART), 1972, J. Am. Soc. Inf. Sci.
[21] Lawrence E. Leonard, et al. Inter-Indexer Consistency and Retrieval Effectiveness: Measurement of Relationships, 1975.
[22] Amanda Spink, et al. Searchers, The Subjects They Search, And Sufficiency: A Study Of A Large Sample Of Excite Searches, 1998, WebNet.
[23] Nicholas J. Belkin, et al. Intelligent Information Retrieval: Whose Intelligence?, 1996, ISI.