Revisiting Iterative Relevance Feedback for Document and Passage Retrieval

As more search traffic comes from mobile phones, intelligent assistants, and smart-home devices, new challenges (e.g., limited presentation space) and opportunities arise in information retrieval. Relevance feedback (RF), although effective, has rarely been used in real search scenarios because of the overhead of collecting users' relevance judgments. However, since users tend to interact more with the results shown on these new interfaces, it becomes feasible to obtain their assessments of a few results during each interaction. This makes iterative relevance feedback (IRF) techniques promising today. IRF has not been studied systematically in the new search scenarios, and its effectiveness is largely unknown. In this paper, we revisit IRF and extend it with RF models proposed in recent years. We conduct extensive experiments to analyze IRF and compare it with the standard top-k RF framework on document and passage retrieval. Experimental results show that IRF is at least as effective as standard top-k RF for documents and considerably more effective for passages, indicating that IRF for passage retrieval has substantial potential.
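For readers unfamiliar with the setup, the sketch below illustrates the difference between standard top-k RF and IRF: rather than collecting all judgments on the top k results of the initial ranking at once, IRF asks the user to judge a small number of results per iteration and updates the query model between iterations. This is only a minimal illustration using a Rocchio-style vector update; the helper names (`rank`, `judge_top`) and the weighting constants are assumptions for exposition, not the RF models evaluated in the paper.

```python
# Minimal sketch of an iterative relevance feedback (IRF) loop.
# `rank` and `judge_top` are caller-supplied callables assumed here for
# illustration; the paper's actual RF models and parameters may differ.
from collections import Counter


def rocchio_update(query_vec, rel_docs, nonrel_docs, alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio-style query modification over bag-of-words term vectors."""
    updated = Counter({t: alpha * w for t, w in query_vec.items()})
    for doc in rel_docs:
        for t, w in doc.items():
            updated[t] += beta * w / max(len(rel_docs), 1)
    for doc in nonrel_docs:
        for t, w in doc.items():
            updated[t] -= gamma * w / max(len(nonrel_docs), 1)
    # Keep only positively weighted terms, as is common practice.
    return Counter({t: w for t, w in updated.items() if w > 0})


def iterative_feedback(query_vec, rank, judge_top, iterations=3, k=2):
    """Run IRF: after each ranking, the user judges the top-k unseen results
    and the query model is updated before the next iteration."""
    seen = set()
    for _ in range(iterations):
        ranking = [d for d in rank(query_vec) if d["id"] not in seen]
        judged = judge_top(ranking[:k])          # user labels a few results
        seen.update(d["id"] for d, _ in judged)
        rel = [d["terms"] for d, is_rel in judged if is_rel]
        nonrel = [d["terms"] for d, is_rel in judged if not is_rel]
        query_vec = rocchio_update(query_vec, rel, nonrel)
    return rank(query_vec)                       # final ranking
```

In the standard top-k RF framework, the loop above would run for a single iteration with a larger k; IRF spreads the same judgment budget over several smaller rounds so that each round benefits from the previous updates.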
