Learning to re-rank web search results with multiple pairwise features

Web search ranking functions are typically learned from features of individual documents, i.e., pointwise features. As a result, the rich relationships among documents, which carry multiple types of useful information, are either ignored entirely or exploited only to a limited extent. In this paper, we propose to exploit multiple pairwise relationships between documents in a learning setting to re-rank search results. In particular, we use a set of pairwise features to capture various kinds of pairwise relationships, and we design two machine-learned re-ranking methods that combine these features with a base ranking function: a pairwise comparison method and a pairwise function decomposition method. Furthermore, we propose several schemes to estimate the potential gain of our re-ranking methods on each query and to apply them selectively to queries where that gain is predicted with high confidence. Experiments on a large-scale editorial data set from a commercial search engine show that considering multiple pairwise relationships is beneficial: the proposed methods achieve significant gains over methods that consider only pointwise features or a single type of pairwise relationship.
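The abstract leaves the re-ranking mechanics at a high level. As a rough illustration of the pairwise comparison idea, the Python sketch below assumes a hypothetical trained preference model pairwise_pref(d_i, d_j) that returns the probability that d_i should rank above d_j, aggregates its pairwise "wins" per document, and blends the aggregate with the base ranking score. The win-counting aggregation, the alpha blend, and all names here are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative pairwise-comparison re-ranking sketch (an assumption, not the
# paper's exact method): aggregate pairwise preferences into per-document
# "win" scores and blend them with the base ranking function.
from itertools import permutations

def rerank(docs, base_score, pairwise_pref, alpha=0.5):
    """Re-rank docs by blending base scores with aggregated pairwise wins.

    docs          -- list of documents retrieved for one query
    base_score    -- callable doc -> float, the base ranking function
    pairwise_pref -- callable (d_i, d_j) -> float in [0, 1], a hypothetical
                     trained model giving P(d_i should rank above d_j)
    alpha         -- illustrative blend weight between the two signals
    """
    n = len(docs)
    wins = [0.0] * n
    # Sum preference scores over all ordered pairs (the comparison step).
    for i, j in permutations(range(n), 2):
        wins[i] += pairwise_pref(docs[i], docs[j])
    # Normalize wins to [0, 1] so the two signals are on comparable scales.
    scores = [
        alpha * base_score(docs[k]) + (1 - alpha) * wins[k] / max(n - 1, 1)
        for k in range(n)
    ]
    order = sorted(range(n), key=lambda k: scores[k], reverse=True)
    return [docs[k] for k in order]
```

In the paper the combination with the base ranker is learned rather than fixed as above, and re-ranking is applied selectively, only on queries where the estimated gain is predicted with high confidence.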
