Decoupled Side Information Fusion for Sequential Recommendation

Side information fusion for sequential recommendation (SR) aims to effectively leverage various types of side information to enhance the performance of next-item prediction. Most state-of-the-art methods build on self-attention networks and focus on exploring various solutions to integrate the item embedding and side information embeddings before the attention layer. However, our analysis shows that the early integration of various types of embeddings limits the expressiveness of attention matrices due to a rank bottleneck and constrains the flexibility of gradients. It also entangles correlations among heterogeneous information sources, which introduces extra noise into the attention calculation. Motivated by this, we propose Decoupled Side Information Fusion for Sequential Recommendation (DIF-SR), which moves the side information from the input to the attention layer and decouples the attention calculation of various side information and item representation. We theoretically and empirically show that the proposed solution allows higher-rank attention matrices and flexible gradients to enhance the modeling capacity of side information fusion. In addition, auxiliary attribute predictors are proposed to further activate the beneficial interaction between side information and item representation learning. Extensive experiments on four real-world datasets demonstrate that our proposed solution stably outperforms state-of-the-art SR models. Further studies show that our proposed solution can be readily incorporated into current attention-based SR models and significantly boosts performance. Our source code is available at https://github.com/AIM-SE/DIF-SR.
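The core idea described above can be illustrated with a minimal single-head sketch: instead of summing item and side-information embeddings before attention, each embedding type produces its own attention score matrix, and the score matrices are fused (here by simple addition, one possible fusion choice) before the softmax, while values come from the item embeddings only. All weight matrices below are random stand-ins for learned parameters, and the function names are illustrative, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def decoupled_attention(item_emb, side_embs, seed=0):
    """Single-head sketch of decoupled side-information fusion.

    item_emb:  (seq_len, d_item) item embedding sequence.
    side_embs: list of (seq_len, d_side_i) side-information embeddings
               (e.g. category, brand), each with its own dimension.
    """
    rng = np.random.default_rng(seed)

    def scores(x):
        # Per-type query/key projections (random stand-ins for learned weights).
        d = x.shape[-1]
        Wq = rng.standard_normal((d, d)) / np.sqrt(d)
        Wk = rng.standard_normal((d, d)) / np.sqrt(d)
        return (x @ Wq) @ (x @ Wk).T / np.sqrt(d)

    # Decoupled: each embedding type yields its own (seq_len, seq_len)
    # attention score matrix; fuse them before the softmax.
    fused = scores(item_emb)
    for s in side_embs:
        fused = fused + scores(s)

    # Values are computed from the item embeddings only, so side information
    # steers attention weights without being mixed into item representations.
    d_item = item_emb.shape[-1]
    Wv = rng.standard_normal((d_item, d_item)) / np.sqrt(d_item)
    return softmax(fused) @ (item_emb @ Wv)

# Usage: a length-4 sequence with 8-dim item embeddings and two side
# information types of dimension 2 and 3.
out = decoupled_attention(np.ones((4, 8)), [np.ones((4, 2)), np.ones((4, 3))])
```

Because each side-information type keeps its own score matrix, the fused attention is a sum of full-rank terms rather than attention over a single early-summed embedding, which is the rank and gradient advantage the abstract refers to.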
