Zero-Shot Next-Item Recommendation using Large Pretrained Language Models

Large language models (LLMs) have achieved impressive zero-shot performance on various natural language processing (NLP) tasks, demonstrating their ability to perform inference without task-specific training examples. Despite this success, no research has yet explored the potential of LLMs to perform next-item recommendation in the zero-shot setting. We identify two major challenges that must be addressed for LLMs to act effectively as recommenders. First, the recommendation space can be extremely large for LLMs; second, LLMs are unaware of the target user's past interactions and preferences. To address these challenges, we propose a prompting strategy called Zero-Shot Next-Item Recommendation (NIR) prompting that directs LLMs to make next-item recommendations. Specifically, the NIR-based strategy uses an external module to generate candidate items based on user-filtering or item-filtering. Our strategy then applies a three-step prompting process that guides GPT-3 to carry out subtasks that capture the user's preferences, select representative previously watched movies, and recommend a ranked list of 10 movies. We evaluate the proposed approach using GPT-3 on the MovieLens 100K dataset and show that it achieves strong zero-shot performance, even outperforming some strong sequential recommendation models trained on the entire training dataset. These promising results highlight the ample research opportunities for using LLMs as recommenders. The code can be found at https://github.com/AGI-Edgerunners/LLM-Next-Item-Rec.
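The pipeline described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: `call_llm` is a stub standing in for a real GPT-3 API call, the prompt wording is paraphrased, and the co-occurrence candidate generator is a simplified stand-in for the external user-filtering module.

```python
from collections import Counter

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM completion call (e.g., the GPT-3 API).
    Here it simply echoes the final instruction so the flow can be traced."""
    return "<model answer to: " + prompt.splitlines()[-1] + ">"

def build_candidates(watched, all_users_histories, k=19):
    """User-filtering sketch: rank unseen movies by how often they appear in
    the histories of users who share at least one movie with the target user."""
    counts = Counter()
    watched_set = set(watched)
    for history in all_users_histories:
        if watched_set & set(history):
            for movie in history:
                if movie not in watched_set:
                    counts[movie] += 1
    return [movie for movie, _ in counts.most_common(k)]

def nir_prompt(watched, candidates):
    """Three-step NIR prompting: each step's answer is fed into the next prompt."""
    header = (
        "Candidate Set (candidate movies): " + ", ".join(candidates) + ".\n"
        "The movies I have watched (watched movies): " + ", ".join(watched) + ".\n"
    )
    # Step 1: capture the user's preferences from the watched movies.
    prefs = call_llm(header + "Step 1: What features are most important to me when selecting movies?")
    # Step 2: select representative watched movies, conditioned on Step 1.
    reps = call_llm(header + "My preferences: " + prefs +
                    "\nStep 2: Select the most representative of my watched movies.")
    # Step 3: recommend a ranked list of 10 candidates, conditioned on Steps 1-2.
    return call_llm(header + "Representative movies: " + reps +
                    "\nStep 3: Recommend 10 movies from the candidate set, ranked.")
```

In a real run, `call_llm` would send each prompt to GPT-3 and the final answer would be parsed into a ranked list of 10 titles; the chaining of the three answers is the essential structure.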
