RecoNet: An Interpretable Neural Architecture for Recommender Systems

Neural systems offer high predictive accuracy but are plagued by long training times and low interpretability. We present a simple neural architecture for recommender systems that addresses several of these shortcomings. First, the approach achieves predictive power comparable to state-of-the-art recommender approaches. Second, owing to its simplicity, the trained model can be interpreted easily because it exposes the individual contribution of each input feature to the decision; this makes our method three orders of magnitude faster than general-purpose explanatory approaches such as LIME. Finally, thanks to its design, our architecture addresses cold-start issues, so the model does not require retraining in the presence of new users.
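The abstract does not spell out the architecture, but the interpretability and cold-start claims are easiest to see with a toy example. The sketch below is a hypothetical additive scoring model over feature embeddings, not RecoNet itself: because the score is a plain sum of per-feature terms, the contribution of every input feature can be read off directly without a post-hoc explainer such as LIME, and a new user described only by their features needs no retraining. The dimensions, the feature_emb and item_emb arrays, and score_with_contributions are illustrative assumptions.

# Hypothetical sketch (not the paper's actual model): additive scoring over
# feature embeddings, so each feature's contribution is available by construction.
import numpy as np

rng = np.random.default_rng(0)

n_features, n_items, dim = 5, 100, 16             # assumed toy sizes
feature_emb = rng.normal(size=(n_features, dim))  # one embedding per user feature
item_emb = rng.normal(size=(n_items, dim))        # one embedding per item

def score_with_contributions(active_features, item_id):
    """Score an item for a user described only by their active input features.

    Returns the total score and each feature's additive contribution, so the
    prediction is interpretable by construction. New users need no retraining:
    they are represented purely by their features (cold start).
    """
    contributions = {
        f: float(feature_emb[f] @ item_emb[item_id]) for f in active_features
    }
    return sum(contributions.values()), contributions

total, per_feature = score_with_contributions(active_features=[0, 2, 4], item_id=7)
print(f"score={total:.3f}", per_feature)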
