暂无分享,去创建一个
[1] M. de Rijke,et al. Relative confidence sampling for efficient on-line ranker evaluation , 2014, WSDM.
[2] Chao Liu,et al. Efficient multiple-click models in web search , 2009, WSDM '09.
[3] Maarten de Rijke,et al. Probabilistic Multileave Gradient Descent , 2016, ECIR.
[4] Yisong Yue,et al. Beyond position bias: examining result attractiveness as a source of presentation bias in clickthrough data , 2010, WWW '10.
[5] Thorsten Joachims,et al. Optimizing search engines using clickthrough data , 2002, KDD.
[6] Maarten de Rijke,et al. Sensitive and Scalable Online Evaluation with Theoretical Guarantees , 2017, CIKM.
[7] Tie-Yan Liu,et al. Learning to Rank for Information Retrieval , 2011 .
[8] Huazheng Wang,et al. Efficient Exploration of Gradient Space for Online Learning to Rank , 2018, SIGIR.
[9] M. de Rijke,et al. Online Exploration for Detecting Shifts in Fresh Intent , 2014, CIKM.
[10] Katja Hofmann,et al. Balancing Exploration and Exploitation in Learning to Rank Online , 2011, ECIR.
[11] Tong Zhao,et al. Constructing Reliable Gradient Exploration for Online Learning to Rank , 2016, CIKM.
[12] M. de Rijke,et al. Probabilistic Multileave for Online Retrieval Evaluation , 2015, SIGIR.
[13] Thorsten Joachims,et al. Interactively optimizing information retrieval systems as a dueling bandits problem , 2009, ICML '09.
[14] Mark Sanderson,et al. Test Collection Based Evaluation of Information Retrieval Systems , 2010, Found. Trends Inf. Retr..
[15] Yi Chang,et al. Yahoo! Learning to Rank Challenge Overview , 2010, Yahoo! Learning to Rank Challenge.
[16] M. de Rijke,et al. Multileave Gradient Descent for Fast Online Learning to Rank , 2016, WSDM.
[17] W. Bruce Croft,et al. Unbiased Learning to Rank with Unbiased Propensity Estimation , 2018, SIGIR.
[18] Katja Hofmann,et al. Reusing historical interaction data for faster online learning to rank for IR , 2013, DIR.
[19] Shinichi Nakajima,et al. Global analytic solution of fully-observed variational Bayesian matrix factorization , 2013, J. Mach. Learn. Res..
[20] M. de Rijke,et al. A Neural Click Model for Web Search , 2016, WWW.
[21] Falk Scholer,et al. On Crowdsourcing Relevance Magnitudes for Information Retrieval Evaluation , 2017, ACM Trans. Inf. Syst..
[22] Filip Radlinski,et al. Optimized interleaving for online retrieval evaluation , 2013, WSDM.
[23] Tao Qin,et al. Introducing LETOR 4.0 Datasets , 2013, ArXiv.
[24] M. de Rijke,et al. Multileaved Comparisons for Fast Online Evaluation , 2014, CIKM.
[25] Salvatore Orlando,et al. Fast Ranking with Additive Ensembles of Oblivious and Non-Oblivious Regression Trees , 2016, ACM Trans. Inf. Syst..
[26] Susan T. Dumais. Keynote: The Web Changes Everything: Understanding and Supporting People in Dynamic Information Environments , 2010, ECDL.
[27] M. de Rijke,et al. Differentiable Unbiased Online Learning to Rank , 2018, CIKM.
[28] M. de Rijke,et al. Click Models for Web Search , 2015, Click Models for Web Search.
[29] Katja Hofmann,et al. A probabilistic method for inferring preferences from clicks , 2011, CIKM '11.
[30] M. de Rijke,et al. Balancing Speed and Quality in Online Learning to Rank for Information Retrieval , 2017, CIKM.
[31] Pertti Vakkari,et al. Changes in relevance criteria and problem stages in task performance , 2000, J. Documentation.
[32] Marc Najork,et al. Learning to Rank with Selection Bias in Personal Search , 2016, SIGIR.
[33] ChengXiang Zhai,et al. Evaluation of methods for relative comparison of retrieval systems based on clickthroughs , 2009, CIKM.
[34] Katja Hofmann,et al. Fast and reliable online learning to rank for information retrieval , 2013, SIGIR Forum.