BLM-Rank: A Bayesian Linear Method for Learning to Rank and Its GPU Implementation

Ranking is an important task in information systems, with applications such as document and webpage retrieval, collaborative filtering, and advertising. The last decade has witnessed growing interest in learning to rank as a means of leveraging training information in such systems. In this paper, we propose a new learning-to-rank method, BLM-Rank, which uses a linear function to score samples and models the pairwise preferences between samples based on their scores under a Bayesian framework. A stochastic gradient approach is adopted to maximize the posterior probability in BLM-Rank. For industrial practice, we have also implemented the proposed algorithm on the Graphics Processing Unit (GPU). Experimental results on LETOR demonstrate that BLM-Rank outperforms state-of-the-art methods, including RankSVM-Struct, RankBoost, AdaRank-NDCG, AdaRank-MAP and ListNet. Moreover, the GPU implementation of BLM-Rank is ten to eleven times faster than its CPU counterpart in the training phase, and one to four times faster in the testing phase.

key words: ranking, Bayesian Personalized Ranking, stochastic gradient method, GPU
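The abstract's recipe (a linear scoring function, a sigmoid-based pairwise preference model in the style of Bayesian Personalized Ranking, and stochastic gradient ascent on the log-posterior with a Gaussian prior acting as L2 regularization) can be sketched roughly as follows. This is an illustrative sketch, not the paper's implementation; the function name, hyperparameters, and data layout are all assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_pairwise_linear_ranker(pairs, n_features,
                                 lr=0.01, reg=0.01, epochs=100, seed=0):
    """Hypothetical BPR-style trainer, not the paper's exact algorithm.

    pairs: list of (x_pos, x_neg) feature-vector tuples, where x_pos
    should be ranked above x_neg. The preference probability is
    sigmoid(w . (x_pos - x_neg)); the Gaussian prior on w shows up as
    the L2 term in the gradient.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(n_features)
    for _ in range(epochs):
        # stochastic updates over a shuffled pass of preference pairs
        for idx in rng.permutation(len(pairs)):
            x_pos, x_neg = pairs[idx]
            diff = x_pos - x_neg
            # gradient of log sigmoid(w . diff) minus the prior term
            g = (1.0 - sigmoid(w @ diff)) * diff - reg * w
            w += lr * g
    return w
```

At test time, scoring reduces to one matrix-vector product (`scores = X @ w`) followed by a sort, which is why both phases parallelize naturally on a GPU.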
