A study on the use of summaries and summary-based query expansion for a question-answering task

In this paper we report an initial study of the effectiveness of query-biased summaries for a question-answering task. Our summarisation system presents searchers with short summaries of retrieved documents, each composed of a set of sentences that highlight the main points of the document as they relate to the query. The summaries also serve as a source of evidence for a query expansion algorithm, allowing us to test their use in both interactive and automatic query expansion. We present the results of a set of experiments evaluating these two approaches and discuss the relative success of each technique.
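The sketch below illustrates the kind of query-biased sentence extraction described above. It is a minimal, assumption-laden example, not the authors' actual scoring model: the `tokenize` helper, the naive sentence splitter, and the query-term overlap score are all stand-ins for whatever the real system uses.

```python
import re


def tokenize(text):
    """Lowercase word tokens; a stand-in for a real tokeniser/stemmer."""
    return re.findall(r"[a-z0-9]+", text.lower())


def query_biased_summary(document, query, max_sentences=3):
    """Return the sentences of `document` that best match `query`.

    Sentences are split naively on end punctuation and scored by the
    number of distinct query terms each one contains.
    """
    query_terms = set(tokenize(query))
    sentences = re.split(r"(?<=[.!?])\s+", document.strip())
    scored = []
    for position, sentence in enumerate(sentences):
        score = len(set(tokenize(sentence)) & query_terms)
        if score > 0:
            scored.append((score, position, sentence))
    # Keep the highest-scoring sentences, presented in document order.
    top = sorted(scored, key=lambda t: (-t[0], t[1]))[:max_sentences]
    return [sentence for _, _, sentence in sorted(top, key=lambda t: t[1])]
```

Using such summaries as evidence for expansion could then look like the following sketch, again a hedged illustration rather than the paper's algorithm: candidate terms are pooled from the summaries of the top-ranked documents, ranked by a simple frequency measure (an assumption; many term-selection measures would fit), and the best few are appended to the query. It reuses the `tokenize` helper above.

```python
from collections import Counter

# A small illustrative stopword list; a real system would use a fuller one.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for", "on"}


def expand_query(query, summaries, num_terms=5):
    """Automatic expansion: append the most frequent non-query,
    non-stopword terms found across the given summaries."""
    query_terms = set(tokenize(query))
    counts = Counter(
        term
        for summary in summaries      # one summary per top-ranked document
        for sentence in summary       # each summary is a list of sentences
        for term in tokenize(sentence)
        if term not in query_terms and term not in STOPWORDS
    )
    expansion = [term for term, _ in counts.most_common(num_terms)]
    return query + " " + " ".join(expansion)
```

For the interactive condition described above, the same ranked terms would instead be offered to the searcher to accept or reject before the query is rerun.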
