Set2setRank: Collaborative Set to Set Ranking for Implicit Feedback based Recommendation

Users often express their preferences through binary behavior data (implicit feedback), such as clicking items or buying products, and implicit feedback based Collaborative Filtering (CF) models predict the top ranked items a user might like by leveraging these implicit user-item interactions. For each user, the implicit feedback is divided into two sets: an observed item set with limited observed behaviors, and a large unobserved item set that mixes negative behaviors with unknown behaviors. Given any user preference prediction model, researchers have either designed ranking based optimization objectives or relied on negative item mining techniques for better optimization. Despite the performance gains of these implicit feedback based models, the recommendation results are still far from satisfactory due to the sparsity of each user's observed item set. To this end, we explore the unique characteristics of implicit feedback and propose the Set2setRank framework for recommendation. The optimization criteria of Set2setRank are twofold. First, we design an item-to-set comparison that encourages each observed item in the sampled observed set to be ranked higher than any unobserved item in the sampled unobserved set. Second, we model a set-level comparison that encourages a margin between a distance summarized over the observed item set and the "hardest" unobserved item in the sampled negative set. Further, an adaptive sampling technique is designed to implement these two goals. Notably, the proposed framework is model-agnostic, can be easily applied to most recommendation prediction approaches, and is time efficient in practice. Finally, extensive experiments on three real-world datasets demonstrate the superiority of the proposed approach.
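To make the two criteria concrete, the following is a minimal BPR-style sketch under assumed notation, not the paper's exact loss: let $\hat{r}_{ui}$ denote a model's predicted score for user $u$ and item $i$, let $\mathcal{S}_O$ and $\mathcal{S}_U$ be the sampled observed and unobserved item sets, $\sigma(\cdot)$ the logistic sigmoid, and $\lambda$ a margin hyperparameter. One possible instantiation of the item-to-set and set-to-set objectives is
\[
\mathcal{L}_{\text{item2set}}(u) \;=\; -\sum_{i \in \mathcal{S}_O}\sum_{j \in \mathcal{S}_U} \ln \sigma\!\left(\hat{r}_{ui} - \hat{r}_{uj}\right),
\qquad
\mathcal{L}_{\text{set2set}}(u) \;=\; -\ln \sigma\!\left( f\!\left(\{\hat{r}_{ui}\}_{i \in \mathcal{S}_O}\right) - \max_{j \in \mathcal{S}_U} \hat{r}_{uj} - \lambda \right),
\]
where $f(\cdot)$ is a summary of the observed set (e.g., its minimum or mean score), the maximum over $\mathcal{S}_U$ selects the "hardest" sampled unobserved item, and the overall objective sums both terms over users and sampled sets. The adaptive sampling described above would then control how $\mathcal{S}_O$ and $\mathcal{S}_U$ are drawn at each training step.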
