Faithfully Explaining Rankings in a News Recommender System

There is an increasing demand for algorithms to explain their outcomes. So far, however, no method exists for explaining the rankings produced by a ranking algorithm. To address this gap we propose LISTEN, a LISTwise ExplaiNer of such rankings. To use LISTEN efficiently in production, we train a neural network to learn the underlying explanation space created by LISTEN; we call this model Q-LISTEN. We show that LISTEN produces faithful explanations and that Q-LISTEN is able to learn these explanations. Moreover, we show that LISTEN is safe to use in a real-world environment: users of a news recommender system do not behave significantly differently when they are exposed to explanations generated by LISTEN rather than manually created explanations.
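
To make the distillation idea concrete, the sketch below shows one way a Q-LISTEN-style model could be set up: a small feed-forward network is trained to regress onto the feature-importance vectors produced by a listwise explainer. This is a minimal illustration under assumed names and shapes (`listen_explain`, `QListenNet`, the list length and feature count are all hypothetical); it is not the paper's implementation.

```python
# Minimal sketch of distilling a listwise explainer into a neural network.
# All names, shapes, and the dummy explainer are illustrative assumptions.
import torch
import torch.nn as nn

N_DOCS, N_FEATURES = 10, 20  # assumed ranked-list length and feature count


def listen_explain(features: torch.Tensor) -> torch.Tensor:
    """Stand-in for a listwise explainer: maps the (N_DOCS, N_FEATURES)
    feature matrix of a ranked list to a feature-importance vector.
    Here it is a dummy placeholder, not the LISTEN algorithm itself."""
    return torch.softmax(features.abs().mean(dim=0), dim=-1)


class QListenNet(nn.Module):
    """Small feed-forward network that learns to mimic the explainer."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_DOCS * N_FEATURES, 128),
            nn.ReLU(),
            nn.Linear(128, N_FEATURES),
            nn.Softmax(dim=-1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Flatten each ranked list's feature matrix into a single vector.
        return self.net(x.flatten(start_dim=1))


model = QListenNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):
    # A batch of 32 (synthetic) ranked lists.
    features = torch.randn(32, N_DOCS, N_FEATURES)
    # Labels come from the (slow) explainer; the network learns to imitate them.
    targets = torch.stack([listen_explain(f) for f in features])
    optimizer.zero_grad()
    loss = loss_fn(model(features), targets)
    loss.backward()
    optimizer.step()
```

Once trained, such a network approximates the explainer with a single forward pass, which is what would make it attractive for serving explanations at production latency.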
