Inoculating relevance feedback against poison pills
Relevance feedback is a common approach for enriching queries, given a set of explicitly or implicitly judged documents, in order to improve retrieval performance. Although relevance feedback has been shown to improve overall retrieval effectiveness on average, for some topics employing certain relevant documents can decrease the average precision of the initial run. This happens mostly when a feedback document is only partially relevant and contains off-topic terms; adding those terms to the query as expansion terms degrades retrieval performance. Relevant documents that hurt retrieval performance after feedback are called "poison pills". In this paper, we discuss the effect of poison pills on relevance feedback and present significant words language models as an approach for estimating the feedback model that tackles this problem.
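To make the expansion step concrete, the sketch below shows one way a feedback term distribution could be estimated while discounting terms likely to act as poison pills: terms that are common in the whole collection or that appear in only one feedback document (and are therefore plausibly off-topic). This is a minimal illustration under assumed inputs (the function name `feedback_model`, the tokenized documents, and the penalty parameter are hypothetical); it is not the paper's significant words language model estimation, only a hedged approximation of the general idea.

```python
from collections import Counter
from typing import Dict, List


def feedback_model(
    feedback_docs: List[List[str]],    # tokenized judged-relevant documents
    collection_tf: Dict[str, float],   # background term frequencies over the collection
    top_k: int = 10,
    generality_penalty: float = 0.5,
) -> Dict[str, float]:
    """Toy feedback-model estimate that tries to avoid poison-pill terms by
    discounting (a) terms that are frequent in the collection at large and
    (b) terms concentrated in a single feedback document.  Illustrative
    sketch only; not the significant words estimation from the paper."""
    collection_total = sum(collection_tf.values()) or 1.0
    pooled = Counter()      # term frequencies pooled over all feedback documents
    doc_support = Counter() # number of feedback documents containing each term
    for doc in feedback_docs:
        tf = Counter(doc)
        pooled.update(tf)
        doc_support.update(tf.keys())

    pooled_total = sum(pooled.values()) or 1.0
    scores = {}
    for term, count in pooled.items():
        p_fb = count / pooled_total                                 # P(term | feedback set)
        p_coll = collection_tf.get(term, 0.5) / collection_total    # background probability
        support = doc_support[term] / len(feedback_docs)            # spread across documents
        # prefer terms that are frequent in the feedback set, rare in the
        # collection, and supported by several feedback documents
        scores[term] = (p_fb - generality_penalty * p_coll) * support

    top = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    total = sum(max(s, 0.0) for _, s in top) or 1.0
    return {term: max(s, 0.0) / total for term, s in top}
```

In this sketch the resulting distribution would typically be interpolated with the original query model before re-running retrieval; the interpolation weight and the penalty on general terms are tuning choices, not values taken from the paper.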