CoSam: An Efficient Collaborative Adaptive Sampler for Recommendation

Sampling strategies have been widely applied in recommendation systems to accelerate model learning from implicit feedback data. A typical strategy is to draw negative instances from a uniform distribution, which, however, can severely hurt the model's convergence, stability, and even recommendation accuracy. A promising remedy is to over-sample the ``difficult'' (a.k.a. informative) instances that contribute more to training, but this increases the risk of biasing the model and leading to non-optimal results. Moreover, existing samplers are either heuristic, requiring domain knowledge and often failing to capture the truly ``difficult'' instances, or rely on a sampler model that suffers from low efficiency. To address these problems, we propose CoSam, an efficient and effective collaborative sampling method that consists of: (1) a collaborative sampler model that explicitly leverages user-item interaction information in its sampling probability and exhibits the desirable properties of normalization, adaptivity, interaction-information awareness, and sampling efficiency; and (2) an integrated sampler-recommender framework that leverages the sampler model in prediction to offset the bias caused by uneven sampling. Correspondingly, we derive a fast reinforced training algorithm for our framework to boost sampler performance and sampler-recommender collaboration. Extensive experiments on four real-world datasets demonstrate the superiority of the proposed collaborative sampler model and integrated sampler-recommender framework.
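The contrast the abstract draws between uniform negative sampling and over-sampling ``difficult'' instances, together with importance weights that offset the resulting bias, can be sketched in a toy NumPy example. This is an illustrative sketch only, not the CoSam model itself: the item scores, the softmax sampling form, and the batch size are all assumptions made for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items = 1000

# Hypothetical per-item "difficulty" scores for unobserved items
# (e.g., the recommender's current predicted relevance); a higher
# score marks a more informative negative instance.
scores = rng.random(n_items)

# Uniform negative sampling: every unobserved item is equally likely.
uniform_p = np.full(n_items, 1.0 / n_items)

# Adaptive sampling (assumed softmax form): over-sample the
# difficult, high-score negatives.
adaptive_p = np.exp(scores) / np.exp(scores).sum()

# Draw a batch of 64 negative items under the adaptive distribution.
neg = rng.choice(n_items, size=64, p=adaptive_p)

# Importance weights correct for the uneven sampling: instances that
# were over-sampled are down-weighted, in the same spirit as folding
# the sampler's probability into prediction.
weights = uniform_p[neg] / adaptive_p[neg]
```

Under such a correction, an estimator averaged with `weights` remains unbiased with respect to the uniform distribution even though the batch itself is skewed toward difficult negatives.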
