Language modeling is an effective and theoretically attractive probabilistic framework for text information retrieval. The basic idea of this approach is to estimate a language model for a given document (or document set) and then perform retrieval or classification based on that model. A common language modeling approach assumes that the data D is generated from a mixture of several language models. The core problem is to find the maximum likelihood estimate of one mixture component, given fixed mixture weights and the other components. The EM algorithm is usually used to find the solution. In this paper, we prove that an exact maximum likelihood estimate of the unknown mixture component exists and can be computed by the new algorithm we propose. We further improve this algorithm into an efficient O(k) procedure for finding the exact solution, where k is the number of words occurring at least once in D. Furthermore, we prove that the estimated probabilities of many words are exactly zero, so the maximum likelihood estimate acts explicitly as a feature selection technique.
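The EM baseline the abstract refers to can be sketched as follows for a two-component word mixture: the background component and its mixing weight are held fixed, and only the unknown component is re-estimated. This is a minimal illustrative sketch, not the paper's exact O(k) algorithm; the function name, the weight `lam`, and the toy data are assumptions.

```python
# Hedged sketch: EM for estimating one unknown component of a
# two-component word (multinomial) mixture. The mixing weight `lam`
# of the background model p_bg is fixed; only p_theta is estimated.
# All names and the toy example are illustrative assumptions.

def em_word_mixture(counts, p_bg, lam, iters=100):
    """counts: {word: count in the data D};
    p_bg: fixed background word distribution;
    lam: fixed weight of the background component."""
    words = list(counts)
    # Initialize with the empirical distribution of D.
    total = sum(counts.values())
    p_theta = {w: counts[w] / total for w in words}
    for _ in range(iters):
        # E-step: posterior probability that each occurrence of w
        # was generated by the unknown component, not the background.
        t = {}
        for w in words:
            num = (1 - lam) * p_theta[w]
            den = num + lam * p_bg[w]
            t[w] = num / den if den > 0 else 0.0
        # M-step: re-estimate the unknown component from the
        # expected counts, then renormalize.
        norm = sum(counts[w] * t[w] for w in words)
        p_theta = {w: counts[w] * t[w] / norm for w in words}
    return p_theta

# Toy example with a two-word vocabulary: word "a" is over-represented
# in D relative to the background, so its estimate is pushed up.
counts = {"a": 8, "b": 2}
p_bg = {"a": 0.5, "b": 0.5}
p = em_word_mixture(counts, p_bg, lam=0.5)
```

Note that EM only approaches the solution iteratively; the abstract's point is that the exact maximizer can be computed directly, and assigns exactly zero probability to many words (in this toy case, the estimate for "b" is driven toward zero).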