Adherence and Constancy in LIME-RS Explanations for Recommendation (Long paper)

Explainable recommendation has attracted considerable attention thanks to a renewed interest in explainable artificial intelligence. In particular, post-hoc approaches have proved to be the most easily applicable to increasingly complex recommendation models, which they treat as black boxes. Recent literature has shown that post-hoc explanations based on local surrogate models suffer from robustness problems. This concern becomes even more pressing in human-centered tasks such as recommendation, where the explanation is also expected to enhance increasingly relevant aspects of the user experience, such as transparency and trustworthiness. This paper shows that the behavior of a classical surrogate-based post-hoc approach is strongly model-dependent, and that such an approach cannot be held accountable for the explanations it generates.
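As a rough illustration of the instability at issue (not the paper's LIME-RS implementation), the following self-contained Python sketch fits a LIME-style weighted linear surrogate around a single instance of a toy black-box scorer and repeats the procedure under different random seeds. The names black_box and explain_instance, and every parameter value, are illustrative assumptions.

```python
"""Minimal sketch of a LIME-style local surrogate explanation plus a
'constancy' check: rerunning the explainer with different random seeds
and measuring how much the feature attributions vary."""
import numpy as np
from sklearn.linear_model import Ridge

rng_global = np.random.default_rng(0)

# Stand-in for a black-box recommender's scoring function over a
# binary item-feature vector (e.g., the genres a movie belongs to).
W_TRUE = rng_global.normal(size=10)
def black_box(X):
    return 1.0 / (1.0 + np.exp(-X @ W_TRUE))

def explain_instance(x, predict_fn, n_samples=500, kernel_width=0.75, seed=0):
    """Fit a proximity-weighted linear surrogate on perturbations of x."""
    rng = np.random.default_rng(seed)
    # Perturb by randomly switching off active features (LIME's binary scheme).
    mask = rng.integers(0, 2, size=(n_samples, x.size))
    Z = x * mask
    # Proximity weights: exponential kernel over cosine distance to x.
    dist = 1.0 - (Z @ x) / (np.linalg.norm(Z, axis=1) * np.linalg.norm(x) + 1e-12)
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z, predict_fn(Z), sample_weight=weights)
    return surrogate.coef_  # feature attributions for this instance

x = np.ones(10)  # an item with all 10 features active
explanations = np.stack([explain_instance(x, black_box, seed=s) for s in range(20)])

# Constancy check: how stable is the top-ranked feature across seeds?
print("top feature per run:", explanations.argmax(axis=1))
print("std of attributions:", explanations.std(axis=0).round(3))
```

If the top-ranked feature changes from seed to seed, the surrogate explanation is not constant for that instance; this instability, here induced solely by the perturbation sampling step, is the kind of behavior the paper's analysis targets.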
