Visual interpretability in 3D brain tumor segmentation network

Medical image segmentation is a complex task, yet one of the most essential for diagnostic procedures such as brain tumor detection. Several 3D Convolutional Neural Network (CNN) architectures have achieved remarkable results in brain tumor segmentation. However, due to the black-box nature of CNNs, integrating such models into diagnosis and treatment decisions carries high risk in healthcare: their lack of interpretability makes it difficult to explain the rationale behind their predictions. Successful deployment of deep learning models in the medical domain therefore requires predictions that are both accurate and transparent. In this paper, we generate 3D visual explanations to analyze a 3D brain tumor segmentation model by extending a post-hoc interpretability technique. We explore the advantages of a gradient-free interpretability approach over gradient-based approaches. Moreover, we interpret the behavior of the segmentation model with respect to the input Magnetic Resonance Imaging (MRI) volumes and investigate the model's prediction strategy. To ensure that our visual explanations do not convey false information, we also evaluate the extended methodology quantitatively for the medical image segmentation task. We find that the information captured by the model is consistent with the domain knowledge of human experts, making the model more trustworthy. We train the 3D brain tumor segmentation network on the BraTS 2018 dataset and perform interpretability experiments on it to generate the visual explanations.
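
The abstract does not spell out the gradient-free technique, so the following is only a minimal illustrative sketch of one well-known gradient-free family, RISE-style randomized masking, adapted to a 3D segmentation network. Everything here is an assumption for illustration rather than the authors' implementation: the `model` interface (4-channel MRI volume in, per-voxel class logits out), the function name `rise_saliency_3d`, the mask grid size, and the mean-probability scoring rule.

```python
import torch
import torch.nn.functional as F

def rise_saliency_3d(model, volume, target_class, n_masks=256, grid=8, p_keep=0.5):
    """RISE-style gradient-free saliency for a 3D segmentation model (illustrative).

    volume: (C, D, H, W) tensor, e.g. the 4 BraTS MRI modalities.
    Returns a (D, H, W) saliency volume for `target_class`.
    """
    _, D, H, W = volume.shape
    saliency = torch.zeros(D, H, W, device=volume.device)
    with torch.no_grad():
        for _ in range(n_masks):
            # Coarse random binary grid, upsampled to a smooth soft 3D mask.
            coarse = (torch.rand(1, 1, grid, grid, grid,
                                 device=volume.device) < p_keep).float()
            mask = F.interpolate(coarse, size=(D, H, W),
                                 mode="trilinear", align_corners=False)[0, 0]
            # Forward pass on the masked volume; the model is assumed to
            # return per-voxel class logits of shape (1, K, D, H, W).
            probs = model((volume * mask).unsqueeze(0)).softmax(dim=1)
            # Scalar score: mean probability assigned to the target class,
            # a simple proxy for how well the segmentation survives masking.
            score = probs[0, target_class].mean()
            # Accumulate masks weighted by that score.
            saliency += score * mask
    return saliency / n_masks
```

Voxels that are frequently kept by masks under which the model still predicts the tumor class accumulate high weight, yielding a 3D saliency volume without any backward pass. Ranking voxels by such a map and measuring how the prediction degrades as they are removed gives a deletion-style curve, one plausible form of the quantitative validation the abstract describes.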
