Scoring missing terms in information retrieval tasks

A common approach to the vocabulary mismatch problem is to augment the original query using dictionaries and other lexical resources, or by examining pseudo-relevant documents. Either way, terms are added to form a new query that is then used to score all documents in a subsequent retrieval pass, and as a consequence the focus of the original query may drift because of the newly added terms. We propose a new method that addresses the vocabulary mismatch problem by expanding original query terms only when necessary, complementing the user query for missing terms while documents are being scored. This allows related semantic aspects to be included in a conservative and selective way, reducing the possibility of query drift. Our results using replacements for the missing query terms in modified document and passage retrieval methods show significant improvement over the original methods.
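To make the idea concrete, the sketch below shows one possible form of such a scoring rule: a query term that is absent from a document contributes through its most related term that does occur, discounted by a similarity weight, rather than being added to the query itself. The function and parameter names (score_document, related_terms, alpha) and the IDF-weighted scoring are illustrative assumptions for this sketch, not the paper's actual formulation.

```python
import math
from collections import Counter

def idf(term, doc_freq, num_docs):
    """Smoothed inverse document frequency for a term."""
    df = doc_freq.get(term, 0)
    return math.log((num_docs - df + 0.5) / (df + 0.5) + 1.0)

def score_document(query_terms, doc_terms, doc_freq, num_docs,
                   related_terms, alpha=0.5):
    """Score a document against the query.

    Query terms present in the document are scored directly; a missing
    query term is replaced, at scoring time only, by its best related
    term that appears in the document (e.g., taken from a co-occurrence
    or lexical-affinity model), weighted by the similarity and a
    discount factor alpha.
    """
    counts = Counter(doc_terms)
    score = 0.0
    for q in query_terms:
        if counts[q] > 0:
            # Original term present: score it as usual.
            score += idf(q, doc_freq, num_docs) * counts[q]
        else:
            # Original term missing: fall back to the most similar
            # related term that actually occurs in the document.
            candidates = [(sim, r)
                          for r, sim in related_terms.get(q, {}).items()
                          if counts[r] > 0]
            if candidates:
                sim, r = max(candidates)
                score += alpha * sim * idf(r, doc_freq, num_docs) * counts[r]
    return score
```

Because the replacement is consulted only when the original term is absent, the query itself is never rewritten, which is what keeps this approach more conservative than standard query expansion.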
