Visual interpretation of the CNN decision-making process using simulated brain MRI

Convolutional neural networks (CNNs) are extensively used to analyze medical images, given the remarkable performance they have achieved so far. Because their decision-making process is not transparent, CNNs are regarded as black boxes, which hinders their applicability. We propose a novel visualization technique to shed light on CNN decisions in a classification task. Brain magnetic resonance images are fed to an original 3D CNN trained to discriminate normal brain data from modified brain data. The modification targets specific brain regions by linearly increasing their intensity and involves regions that differ widely in size, position, and enclosed tissues. The proposed visualization method merges the outputs of all convolutional layers to highlight where the model is "looking" during the decision-making process. Our visualizations recover the same areas that were modified in the images, showing that, as expected, these areas are relevant to the prediction. Comparing results from models with different accuracies shows that, even when performance is low, the expected regions appear in the activation maps, pointing the way toward improvements of the CNN architecture.
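The sketch below illustrates one way the core idea of the abstract, merging the outputs of all convolutional layers into a single map at the input resolution, could be implemented. It assumes a PyTorch 3D CNN; the toy architecture (`Small3DCNN`), the channel-wise averaging, and the trilinear upsampling are assumptions for illustration only, not the exact model or merging rule used in the paper.

```python
# Minimal sketch: merge all Conv3d activations into one attention-like map.
# Architecture and merging choices are illustrative assumptions, not the
# paper's exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Small3DCNN(nn.Module):
    """Toy 3D CNN standing in for the paper's classifier (hypothetical)."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        z = self.features(x)
        return self.classifier(z.flatten(1))

def merged_activation_map(model, volume):
    """Collect every Conv3d output via forward hooks, average over channels,
    upsample each map to the input resolution, and average across layers."""
    maps, hooks = [], []

    def hook(_module, _inp, out):
        maps.append(out.detach())

    for m in model.modules():
        if isinstance(m, nn.Conv3d):
            hooks.append(m.register_forward_hook(hook))

    with torch.no_grad():
        model(volume)
    for h in hooks:
        h.remove()

    target_size = volume.shape[2:]  # (D, H, W) of the input MRI volume
    upsampled = [
        F.interpolate(a.mean(dim=1, keepdim=True), size=target_size,
                      mode="trilinear", align_corners=False)
        for a in maps
    ]
    heatmap = torch.stack(upsampled).mean(dim=0)  # merge all layers
    heatmap = (heatmap - heatmap.min()) / (heatmap.max() - heatmap.min() + 1e-8)
    return heatmap.squeeze()  # (D, H, W), normalized to [0, 1]

# Usage example on a random 64^3 "MRI" volume (batch and channel dims added).
model = Small3DCNN().eval()
volume = torch.randn(1, 1, 64, 64, 64)
heatmap = merged_activation_map(model, volume)
print(heatmap.shape)  # torch.Size([64, 64, 64])
```

Overlaying such a map on the input volume would highlight the voxels that drive the classification, which is how the modified regions could be recovered visually; the normalization and layer weighting shown here are one plausible choice among several.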