Efficient Shapley Explanation for Features Importance Estimation Under Uncertainty

Complex deep learning models have shown impressive power in analyzing high-dimensional medical image data. To increase trust in applying deep learning models in the medical field, it is essential to understand why a particular prediction was reached. Feature importance estimation is an important approach to understanding both the model and the underlying properties of the data. Shapley value explanation (SHAP) is a technique for fairly evaluating the importance of a model's input features. However, existing SHAP-based explanation methods have limitations: 1) computational complexity, which hinders their application to high-dimensional medical image data; and 2) sensitivity to noise, which can lead to serious errors. Therefore, we propose an uncertainty estimation method for the feature importance results calculated by SHAP, and we theoretically justify the method under a Shapley value framework. Finally, we evaluate our method on MNIST and a public neuroimaging dataset, and show its potential to discover disease-related biomarkers from neuroimaging data.
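To make the idea of Shapley feature importance with uncertainty concrete, the minimal Python sketch below (our illustration, not the paper's implementation) estimates per-feature Shapley values by Monte Carlo permutation sampling and reports the standard error over permutations as a simple uncertainty proxy. The function name shapley_with_uncertainty, the zero baseline, and the sample counts are illustrative assumptions.

# Minimal sketch: Monte Carlo permutation-sampling Shapley estimator with a
# per-feature standard error as an uncertainty proxy (illustrative only).
import numpy as np

def shapley_with_uncertainty(f, x, baseline, n_permutations=200, seed=0):
    """Estimate Shapley values of f at input x against a baseline input.

    Returns (mean, stderr): the per-feature mean marginal contribution and
    its standard error over the sampled permutations.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    contribs = np.zeros((n_permutations, d))
    for p in range(n_permutations):
        order = rng.permutation(d)
        z = baseline.copy()
        prev = f(z[None, :])[0]
        for j in order:
            z[j] = x[j]                  # add feature j to the coalition
            cur = f(z[None, :])[0]
            contribs[p, j] = cur - prev  # marginal contribution of feature j
            prev = cur
    mean = contribs.mean(axis=0)
    stderr = contribs.std(axis=0, ddof=1) / np.sqrt(n_permutations)
    return mean, stderr

# Toy usage with a linear "model", whose exact Shapley values are its weights.
if __name__ == "__main__":
    w = np.array([1.0, -2.0, 0.5])
    f = lambda X: X @ w
    x = np.array([1.0, 1.0, 1.0])
    baseline = np.zeros(3)
    phi, se = shapley_with_uncertainty(f, x, baseline)
    print("Shapley estimates:", phi)  # approximately [1.0, -2.0, 0.5]
    print("Std. errors:      ", se)

For a linear model the estimator recovers the exact attributions and the reported standard errors shrink as n_permutations grows; for a deep model the same per-feature spread indicates how stable (or noise-sensitive) each attribution is, which is the kind of uncertainty the abstract refers to.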
