groupShapley: Efficient prediction explanation with Shapley values for feature groups

Shapley values have established themselves as one of the most appropriate and theoretically sound frameworks for explaining predictions from complex machine learning models. Their popularity in the explanation setting is probably due to their unique theoretical properties. The main drawback of Shapley values, however, is that their computational complexity grows exponentially in the number of input features, making them infeasible in many real-world situations where there may be hundreds or thousands of features. Furthermore, with many (dependent) features, presenting, visualizing, and interpreting the computed Shapley values also becomes challenging. The present paper introduces groupShapley, a conceptually simple approach for dealing with the aforementioned bottlenecks. The idea is to group the features, for example by type or dependence, and then compute and present Shapley values for these groups instead of for all individual features. Reducing hundreds or thousands of features to half a dozen or so makes precise computation practically feasible and greatly simplifies presentation and knowledge extraction. We prove that under certain conditions, groupShapley is equivalent to summing the feature-wise Shapley values within each feature group. Moreover, we provide a simulation study exemplifying the differences when these conditions are not met. We illustrate the usability of the approach in a real-world car insurance example, where groupShapley is used to provide simple and intuitive explanations.
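
To make the idea concrete, the group-level value can be written by treating each group as a single player in the standard Shapley formula. The notation below is illustrative, not taken verbatim from the paper: with features {1, ..., M} partitioned into groups G_1, ..., G_g,

```latex
% Shapley value of group G_j, with the g groups as players:
\phi_{G_j} = \sum_{S \subseteq \{1,\dots,g\} \setminus \{j\}}
  \frac{|S|!\,(g - |S| - 1)!}{g!}
  \left[ v\Big(\bigcup_{k \in S} G_k \cup G_j\Big)
       - v\Big(\bigcup_{k \in S} G_k\Big) \right]
```

where v(T) is a value function such as the conditional expectation E[f(x) | x_T]. The sum runs over 2^(g-1) coalitions of groups rather than 2^(M-1) coalitions of individual features, which is the source of the computational saving.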

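A minimal computational sketch under the same assumptions (the function name and the value_fn interface here are ours, purely illustrative; a practical implementation would estimate v(S) by Monte Carlo or a dependence-aware method):

```python
from itertools import combinations
from math import factorial

def group_shapley(value_fn, groups):
    """Exact Shapley values with feature groups as players.

    value_fn : callable mapping a tuple of feature indices to v(S),
               e.g. an estimate of E[f(x) | x_S] (hypothetical interface).
    groups   : list of tuples, each holding the indices of one feature group.

    Cost is O(2^g) evaluations of value_fn, where g = len(groups),
    instead of O(2^M) for feature-wise Shapley values.
    """
    g = len(groups)
    phi = [0.0] * g
    for j in range(g):
        others = [k for k in range(g) if k != j]
        for size in range(g):
            # Shapley kernel weight for a coalition of `size` other groups.
            weight = factorial(size) * factorial(g - size - 1) / factorial(g)
            for S in combinations(others, size):
                feats = tuple(sorted(i for k in S for i in groups[k]))
                feats_with_j = tuple(sorted(feats + groups[j]))
                phi[j] += weight * (value_fn(feats_with_j) - value_fn(feats))
    return phi
```

With half a dozen groups this involves only 2^6 = 64 distinct coalitions per prediction, so exact computation stays cheap even when the underlying feature count M is in the hundreds.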