CBR-LIME: A Case-Based Reasoning Approach to Provide Specific Local Interpretable Model-Agnostic Explanations

Research on eXplainable AI (XAI) has produced several model-agnostic explanation algorithms, LIME [10] (Local Interpretable Model-Agnostic Explanations) being one of the most popular. Rather than trying to explain the entire model, LIME perturbs the query input locally and monitors the impact of each perturbation on the model's predictions, using that impact as the explanation. Although LIME is general and flexible, simple perturbations are not sufficient in some scenarios, so other approaches, such as Anchors [15], adapt the perturbation strategy to the dataset. In this paper, we propose a CBR solution to the problem of configuring the parameters of the LIME algorithm for explaining an image classifier. The case base captures the human perception of the quality of the explanations generated with different LIME parameter configurations, and these stored parameter configurations are then reused for similar input images.
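To make the retrieve-and-reuse idea concrete, below is a minimal, hypothetical sketch built on the public `lime` and `scikit-image` Python packages. The case-base structure, the function names (`retrieve_config`, `explain_with_config`), the use of SSIM [6] as the image-similarity measure, and the choice of Quick Shift [22] parameters to tune are all assumptions for illustration, not the authors' actual implementation.

```python
import numpy as np
from lime import lime_image
from lime.wrappers.scikit_image import SegmentationAlgorithm
from skimage.metrics import structural_similarity as ssim
from skimage.transform import resize


def retrieve_config(query, case_base):
    """Retrieve the LIME parameter configuration stored with the most similar image.

    `case_base` is assumed to be a list of dicts such as
    {"image": <float RGB array in [0, 1]>,
     "config": {"num_samples": 1000, "num_features": 5,
                "kernel_size": 4, "max_dist": 200, "ratio": 0.2}},
    where each configuration reflects a human rating of explanation quality.
    """
    best_config, best_score = None, -np.inf
    for case in case_base:
        # Compare the query against the stored image using structural similarity (SSIM).
        ref = resize(case["image"], query.shape, anti_aliasing=True)
        score = ssim(query, ref, channel_axis=-1, data_range=1.0)
        if score > best_score:
            best_config, best_score = case["config"], score
    return best_config


def explain_with_config(query, predict_fn, config):
    """Run LIME on the query image using the retrieved parameter configuration."""
    # Quick Shift segmentation, parameterised by the retrieved configuration.
    segmenter = SegmentationAlgorithm(
        "quickshift",
        kernel_size=config["kernel_size"],
        max_dist=config["max_dist"],
        ratio=config["ratio"],
    )
    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        query,
        classifier_fn=predict_fn,  # maps a batch of images to class probabilities
        top_labels=1,
        hide_color=0,
        num_samples=config["num_samples"],
        segmentation_fn=segmenter,
    )
    # Return the query image with the most relevant superpixels highlighted.
    return explanation.get_image_and_mask(
        explanation.top_labels[0],
        positive_only=True,
        num_features=config["num_features"],
        hide_rest=False,
    )
```

In this reading, the CBR cycle replaces LIME's fixed defaults: the segmentation and sampling parameters that worked well (according to human judgement) for visually similar images are retrieved and reused, rather than being hand-tuned per query.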

[1] Emil Pitkin, et al. Peeking Inside the Black Box: Visualizing Statistical Learning With Plots of Individual Conditional Expectation, 2013, arXiv:1309.6392.

[2] Zachary C. Lipton, et al. The mythos of model interpretability, 2018, Commun. ACM.

[3] Rosina O. Weber, et al. Investigating Textual Case-Based XAI, 2018, ICCBR.

[4] Tim Miller, et al. Explanation in Artificial Intelligence: Insights from the Social Sciences, 2017, Artif. Intell.

[5] Mark T. Keane, et al. How Case-Based Reasoning Explains Neural Networks: A Theoretical Analysis of XAI Using Post-Hoc Explanation-by-Example from a Survey of ANN-CBR Twin-Systems, 2019, ICCBR.

[6] Eero P. Simoncelli, et al. Image quality assessment: from error visibility to structural similarity, 2004, IEEE Transactions on Image Processing.

[7] Alan C. Bovik, et al. A Statistical Evaluation of Recent Full Reference Image Quality Assessment Algorithms, 2006, IEEE Transactions on Image Processing.

[8] Ryen W. White. Opportunities and challenges in search interaction, 2018, Commun. ACM.

[9] Scott Lundberg, et al. A Unified Approach to Interpreting Model Predictions, 2017, NIPS.

[10] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, ArXiv.

[11] D. Arul Suju, et al. FLANN: Fast approximate nearest neighbour search algorithm for elucidating human-wildlife conflicts in forest areas, 2017, 2017 Fourth International Conference on Signal Processing, Communication and Networking (ICSCN).

[12] Michael S. Bernstein, et al. Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations, 2016, International Journal of Computer Vision.

[13] Leo Breiman, et al. Random Forests, 2001, Machine Learning.

[14] Daniel S. Weld, et al. The challenge of crafting intelligible intelligence, 2018, Commun. ACM.

[15] Carlos Guestrin, et al. Anchors: High-Precision Model-Agnostic Explanations, 2018, AAAI.

[16] David McSherry, et al. Introduction to the Special Issue on Explanation in Case-Based Reasoning, 2005, Artificial Intelligence Review.

[17] David B. Leake, et al. CBR Confidence as a Basis for Confidence in Black Box Systems, 2019, ICCBR.

[18] Dumitru Erhan, et al. Going deeper with convolutions, 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

[19] Padraig Cunningham, et al. Explanation Oriented Retrieval, 2004, ECCBR.

[20] Agnar Aamodt, et al. Explanation in Case-Based Reasoning–Perspectives and Goals, 2005, Artificial Intelligence Review.

[21] Santiago Ontañón, et al. Structural plan similarity based on refinements in the space of partial plans, 2017, Comput. Intell.

[22] Stefano Soatto, et al. Quick Shift and Kernel Methods for Mode Seeking, 2008, ECCV.

[23] J. Friedman. Greedy function approximation: A gradient boosting machine, 2001.

[24] Cynthia Rudin, et al. Deep Learning for Case-based Reasoning through Prototypes: A Neural Network that Explains its Predictions, 2017, AAAI.