ReXPlug: Explainable Recommendation using Plug-and-Play Language Model

Explainable recommendation provides the reasons why an item is recommended to a user, which often increases user satisfaction and the persuasiveness of the recommendation. An intuitive way to explain a recommendation is to generate a synthetic, personalized natural-language review for the user-item pair. Although some approaches in the literature explain recommendations by generating reviews, the quality of the generated reviews is questionable. Moreover, these methods usually take considerable time to train the underlying language model responsible for generating the text. In this work, we propose ReXPlug, an end-to-end framework that explains recommendations in a plug-and-play manner. ReXPlug predicts accurate ratings and exploits a Plug and Play Language Model to generate high-quality reviews. We train a simple sentiment classifier to control a pre-trained language model during generation, bypassing training of the language model from scratch. Such a simple, neat model is much easier to implement and train, and hence very efficient for generating reviews. We personalize the reviews by leveraging a special jointly trained cross-attention network. Our detailed experiments show that ReXPlug outperforms many recent models on rating prediction across various datasets by utilizing textual reviews as a regularizer. Quantitative analysis shows that the reviews generated by ReXPlug are semantically close to the ground-truth reviews, while qualitative analysis demonstrates their high quality from both empirical and analytical viewpoints. Our implementation is available online.

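ReXPlug builds on the Plug and Play Language Model (PPLM) idea: rather than retraining the language model, the gradient of a small attribute classifier steers a frozen pre-trained model at decoding time. The sketch below illustrates that control loop in a simplified form, assuming a GPT-2 backbone via HuggingFace transformers. It shifts only the final hidden state (full PPLM perturbs the per-layer key-value activations and adds a KL penalty to preserve fluency), and `sentiment_head` is an untrained, hypothetical stand-in for the paper's sentiment classifier; treat this as an illustration of the mechanism, not ReXPlug's actual implementation.

```python
# Minimal sketch of PPLM-style controlled decoding with a sentiment head.
# Assumptions: GPT-2 backbone, toy linear classifier (`sentiment_head`),
# steering of the last hidden state only. Not ReXPlug's exact code.
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

# Toy attribute classifier: maps the LM's hidden state to 2 sentiment classes.
# In ReXPlug, such a classifier is trained on sentiment-labeled reviews.
sentiment_head = torch.nn.Linear(lm.config.n_embd, 2)

def steered_step(input_ids, target_class=1, step_size=0.02, n_steps=3):
    """One decoding step: nudge the final hidden state toward the target
    sentiment class, then sample the next token from the shifted logits."""
    with torch.no_grad():
        out = lm(input_ids, output_hidden_states=True)
    hidden = out.hidden_states[-1][:, -1, :].clone().requires_grad_(True)
    for _ in range(n_steps):
        # The gradient of the classifier loss w.r.t. the hidden state tells
        # us how to move the representation toward the desired sentiment.
        loss = F.cross_entropy(sentiment_head(hidden),
                               torch.tensor([target_class]))
        (grad,) = torch.autograd.grad(loss, hidden)
        hidden = (hidden - step_size * grad).detach().requires_grad_(True)
    with torch.no_grad():
        logits = lm.lm_head(hidden)      # re-score the vocabulary
        probs = F.softmax(logits, dim=-1)
        next_id = torch.multinomial(probs, num_samples=1)
    return torch.cat([input_ids, next_id], dim=1)

ids = tokenizer("This phone case", return_tensors="pt").input_ids
for _ in range(20):
    ids = steered_step(ids, target_class=1)  # steer toward positive sentiment
print(tokenizer.decode(ids[0]))
```

Stronger steering (a larger `step_size` or more gradient steps) trades fluency for attribute control, which is why the original PPLM balances the attribute gradient against a term that keeps the steered distribution close to the unmodified one.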