Robust Generalization and Safe Query-Specialization in Counterfactual Learning to Rank

Existing work in counterfactual Learning to Rank (LTR) has focused on optimizing feature-based models that predict the optimal ranking based on document features. In contrast, LTR methods based on bandit algorithms often optimize tabular models that memorize the optimal ranking per query. Each type of model has its own advantages and disadvantages: feature-based models provide robust performance across many queries, including previously unseen ones, but the available features often limit the rankings the model can predict; tabular models can converge on any possible ranking through memorization, but memorization is extremely prone to noise, which makes tabular models reliable only when large numbers of user interactions are available. Can we develop a robust counterfactual LTR method that pursues memorization-based optimization whenever it is safe to do so? We introduce the Generalization and Specialization (GENSPEC) algorithm, a robust feature-based counterfactual LTR method that pursues per-query memorization when it is safe to do so. GENSPEC optimizes a single feature-based model for generalization, i.e., robust performance across all queries, and many tabular models for specialization, each optimized for high performance on a single query. GENSPEC uses novel relative high-confidence bounds to choose which model to deploy per query. By doing so, GENSPEC combines the high performance of successfully specialized tabular models with the robustness of a generalized feature-based model. Our results show that GENSPEC achieves optimal performance on queries with sufficient click data, while behaving robustly on queries with little or noisy data.
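The abstract does not specify the exact form of the relative high-confidence bounds, so the sketch below only illustrates the general idea of safe per-query model selection under stated assumptions: hypothetical per-interaction IPS-weighted reward estimates (`ips_spec`, `ips_gen`) for the specialized and generalized rankers, and a Hoeffding-style deviation bound, which may differ from the bound actually used in GENSPEC.

```python
import numpy as np


def choose_model_per_query(ips_spec, ips_gen, delta=0.05):
    """Pick the specialized (tabular) model for a query only when a
    high-confidence lower bound on its estimated improvement over the
    generalized (feature-based) model is positive.

    ips_spec, ips_gen: hypothetical per-interaction, IPS-weighted reward
    estimates for the specialized and generalized rankers on the same
    logged clicks for this query.
    """
    n = len(ips_spec)
    if n == 0:
        # No click data for this query: fall back to the robust feature-based model.
        return "generalize"

    # Estimated relative difference in expected reward between the two models.
    diff = np.asarray(ips_spec, dtype=float) - np.asarray(ips_gen, dtype=float)
    mean_diff = diff.mean()

    # Hoeffding-style deviation term (an illustrative choice of bound);
    # the range of the differences serves as a crude boundedness proxy.
    b = np.abs(diff).max()
    epsilon = b * np.sqrt(np.log(1.0 / delta) / (2.0 * n))

    # Deploy the specialized model only if we are confident it is not worse.
    return "specialize" if mean_diff - epsilon > 0 else "generalize"


# Example usage with toy numbers: noisy evidence of a small improvement.
rng = np.random.default_rng(0)
spec = rng.normal(0.55, 0.1, size=200)
gen = rng.normal(0.50, 0.1, size=200)
print(choose_model_per_query(spec, gen))
```

With few interactions the deviation term dominates and the generalized model is kept, which mirrors the safe-specialization behavior described above.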
