Pairwise learning in recommendation: experiments with community recommendation on LinkedIn

Many online systems present a list of recommendations and infer user interests implicitly from clicks or other contextual actions. For modeling user feedback in such settings, a common approach is to consider items acted upon to be relevant to the user and items not acted upon to be irrelevant. However, clicking on some presented items but not others conveys an implicit ordering among them. Pairwise learning, which leverages such implicit ordering between a pair of items, has been successful in areas such as search ranking. In this work, we study whether pairwise learning can improve community recommendation. We first present two novel pairwise models adapted from logistic regression. Both offline and online experiments in a large real-world setting show that incorporating pairwise learning improves recommendation performance, although only slightly. We find that users' preferences regarding the kinds of communities they like can differ greatly, which adversely affects the effectiveness of features derived from pairwise comparisons. We therefore propose a probabilistic latent semantic indexing model for pairwise learning (Pairwise PLSI), which assumes a set of latent user preferences between pairs of items. Our experiments show favorable results for the Pairwise PLSI model and point to the potential of using pairwise learning for community recommendation.
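The core idea of adapting logistic regression to pairwise learning from implicit feedback can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's actual model or features: each training pair consists of a clicked item and a skipped item shown alongside it, and the model learns a weight vector so that clicked items score higher than skipped ones via a logistic loss on the score difference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: feature vectors for communities shown to users.
# Each pair is (clicked item, skipped item); the clicked one is assumed preferred.
dim = 5
n_pairs = 200
pos = rng.normal(loc=0.5, size=(n_pairs, dim))   # features of clicked items
neg = rng.normal(loc=-0.5, size=(n_pairs, dim))  # features of skipped items

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Pairwise logistic regression: model P(clicked preferred over skipped)
# as sigmoid(w . (x_pos - x_neg)) and minimize the negative log-likelihood
# by gradient descent on w.
w = np.zeros(dim)
lr = 0.1
for _ in range(100):
    diff = pos - neg                     # pairwise feature differences
    p = sigmoid(diff @ w)                # predicted preference probabilities
    grad = -(diff * (1.0 - p)[:, None]).mean(axis=0)
    w -= lr * grad

# Fraction of training pairs where the clicked item now outscores the skipped one.
pairwise_accuracy = float(np.mean((pos - neg) @ w > 0))
```

Note that only the relative score of the two items in a pair enters the loss, which is what lets the model exploit the implicit ordering that pointwise relevant/irrelevant labels discard.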
