Abstracting Deep Neural Networks into Concept Graphs for Concept Level Interpretability

The black-box nature of deep learning models prevents them from being fully trusted in domains like biomedicine. Most explainability techniques do not capture the concept-based reasoning that human beings follow. In this work, we attempt to understand the behavior of trained models that perform image processing tasks in the medical domain by building a graphical representation of the concepts they learn. Extracting such a graphical representation of the model's behavior at an abstract, higher conceptual level would reveal what these models learn and help us evaluate the steps they take to arrive at their predictions. We demonstrate our proposed implementation on two biomedical problems: brain tumor segmentation and fundus image classification. We provide an alternative graphical representation of the model by formulating a \textit{concept-level graph}, which makes the problem of intervention to find active inference trails more tractable. Understanding these trails would clarify the hierarchy of the model's decision-making process, as well as the overall nature of the model. Our framework is available at \url{this https URL}.
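The abstract describes the approach only at a high level. As a rough illustration of the idea, the sketch below (our own construction, not the authors' released code) clusters each convolutional layer's filters into "concepts" via hierarchical clustering, links concepts in adjacent layers with edges weighted by a simple proxy (mean absolute kernel weight between the two filter groups), and reads off the heaviest path through the resulting layered DAG as a candidate active inference trail. The toy model, the number of concepts per layer, and the edge-weighting scheme are all assumptions for illustration; the paper's actual formulation may differ.

```python
# Minimal sketch: abstract a CNN into a concept-level graph and extract a trail.
# Assumptions (not from the paper): a toy random-weight model, 4 concepts per
# layer, and mean-|W| edge weights between filter groups.
import numpy as np
import networkx as nx
import torch.nn as nn
from sklearn.cluster import AgglomerativeClustering

# Toy stand-in for a trained model; in practice, load real trained weights.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3), nn.ReLU(),
    nn.Conv2d(16, 32, 3), nn.ReLU(),
    nn.Conv2d(32, 64, 3), nn.ReLU(),
)
convs = [m for m in model if isinstance(m, nn.Conv2d)]
N_CONCEPTS = 4  # assumed number of concepts per layer

# 1. Cluster each layer's filters into concepts (filter index -> concept id).
assignments = []
for conv in convs:
    w = conv.weight.detach().numpy()              # shape: (out, in, kH, kW)
    flat = w.reshape(w.shape[0], -1)              # one row per filter
    labels = AgglomerativeClustering(n_clusters=N_CONCEPTS).fit_predict(flat)
    assignments.append(labels)

# 2. Build the concept graph: node = (layer, concept); edge weight = mean |W|
#    between a layer's filter group (input channels) and the next layer's group.
G = nx.DiGraph()
for layer in range(len(convs)):
    for c in range(N_CONCEPTS):
        G.add_node((layer, c))
for layer in range(len(convs) - 1):
    w_next = np.abs(convs[layer + 1].weight.detach().numpy())
    for ci in range(N_CONCEPTS):                  # concept in layer `layer`
        in_idx = np.where(assignments[layer] == ci)[0]
        for cj in range(N_CONCEPTS):              # concept in layer `layer + 1`
            out_idx = np.where(assignments[layer + 1] == cj)[0]
            strength = w_next[np.ix_(out_idx, in_idx)].mean()
            G.add_edge((layer, ci), (layer + 1, cj), weight=float(strength))

# 3. The heaviest path through the layered DAG is one candidate inference trail.
trail = nx.dag_longest_path(G, weight="weight")
print("candidate inference trail:", trail)
```

Intervention then amounts to graph surgery: deleting a concept node (i.e., zeroing its filter group) and observing the change in the model's output indicates whether the corresponding trail is actually active for a given input.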
