Optimizing Base Rankers Using Clicks - A Case Study Using BM25
Anne Schuth | Floor Sietsma | Shimon Whiteson | Maarten de Rijke