Explainable AI for medical imaging: deep-learning CNN ensemble for classification of estrogen receptor status from breast MRI

Deep-learning convolutional neural networks (DCNNs) are currently the most commonly used approach in medical image analysis; however, they have largely been used as black-box predictors that offer no explanation of the reasoning behind their outputs. Explainable artificial intelligence (XAI) is an emerging subfield of AI that seeks to understand how models make their decisions. In this work, we applied XAI visualization to gain insight into the features learned by a DCNN trained to classify estrogen receptor status (ER+ vs ER-) from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) of the breast.

Our data set contained 1395 ER+ regions-of-interest (ROIs) and 729 ER- ROIs from 148 patients, each with a pre-contrast scan and a minimum of two post-contrast scans. We developed a novel dual-domain DCNN architecture, transfer-trained from the AlexNet model pretrained on ImageNet, that received the spatial (across the volume) and dynamic (across the acquisition sequence) components of each DCE-MRI ROI as input. The network's performance was evaluated with the area under the receiver operating characteristic curve (AUC) from leave-one-case-out cross-validation. To visualize what the DCNN learned, we applied XAI techniques, including the Integrated Gradients attribution method and the SmoothGrad noise-reduction algorithm, to the ROIs from the training set.

We observed that our DCNN learned relevant features in both the spatial and dynamic domains, although the contributing features differed between the two. We also observed the DCNN learning irrelevant features arising from pre-processing artifacts. These observations motivate new approaches to pre-processing our data and training our DCNN.
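
To make the dual-domain design concrete, the following is a minimal PyTorch sketch of a two-branch, transfer-trained AlexNet classifier. The abstract does not specify how the spatial and dynamic branches are fused, so the concatenation-based fusion, head sizes, and input shapes below are illustrative assumptions, not the authors' exact architecture.

import torch
import torch.nn as nn
from torchvision import models

class DualDomainAlexNet(nn.Module):
    """Two AlexNet feature branches -- one for the spatial component of the
    ROI, one for the dynamic (temporal) component -- fused before a binary
    ER+/ER- classification head. Fusion point and head sizes are assumptions."""

    def __init__(self):
        super().__init__()
        # Transfer learning: initialize both branches from ImageNet weights.
        weights = models.AlexNet_Weights.IMAGENET1K_V1
        self.spatial = models.alexnet(weights=weights).features
        self.dynamic = models.alexnet(weights=weights).features
        self.pool = nn.AdaptiveAvgPool2d((6, 6))
        # Concatenate the two 256x6x6 feature maps and classify.
        self.classifier = nn.Sequential(
            nn.Dropout(0.5),
            nn.Linear(2 * 256 * 6 * 6, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, 2),  # logits for ER- / ER+
        )

    def forward(self, x_spatial, x_dynamic):
        fs = torch.flatten(self.pool(self.spatial(x_spatial)), 1)
        fd = torch.flatten(self.pool(self.dynamic(x_dynamic)), 1)
        return self.classifier(torch.cat([fs, fd], dim=1))

model = DualDomainAlexNet()
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 3, 224, 224))
print(logits.shape)  # torch.Size([4, 2])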
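
Leave-one-case-out cross-validation holds out all ROIs from one patient at a time, so ROIs from the same case never appear in both the training and test sets. Below is a minimal sketch using scikit-learn's LeaveOneGroupOut; train_model is a placeholder for the actual training routine (assumed to return a fitted model with predict_proba), and pooling the held-out scores into a single ROC analysis is one common convention, not necessarily the authors' exact protocol.

import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import roc_auc_score

def leave_one_case_out_auc(X, y, case_ids, train_model):
    """X: ROI inputs, y: ER labels (1 = ER+), case_ids: patient ID per ROI."""
    scores = np.empty(len(y), dtype=float)
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=case_ids):
        # Train on all other cases, score the held-out case's ROIs.
        model = train_model(X[train_idx], y[train_idx])
        scores[test_idx] = model.predict_proba(X[test_idx])[:, 1]
    # Pool the held-out scores from all folds into a single ROC analysis.
    return roc_auc_score(y, scores)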
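
Integrated Gradients attributes a prediction to input features by integrating gradients along a path from a baseline to the input, and SmoothGrad averages attributions over noisy copies of the input to suppress visual noise in the resulting maps. A minimal sketch using Captum's IntegratedGradients wrapped in NoiseTunnel (whose nt_type="smoothgrad" implements SmoothGrad) follows; the zero baselines, sample count, noise level, and target class are illustrative choices, and the abstract does not state which toolkit the authors used.

import torch
from captum.attr import IntegratedGradients, NoiseTunnel

# model: the two-input DualDomainAlexNet from the sketch above.
model.eval()
x_s = torch.randn(1, 3, 224, 224)  # spatial ROI input (placeholder)
x_d = torch.randn(1, 3, 224, 224)  # dynamic ROI input (placeholder)

ig = IntegratedGradients(model)
nt = NoiseTunnel(ig)  # wraps IG with SmoothGrad-style noise averaging

# Attribute the ER+ logit (target=1) to both input domains; one attribution
# map is returned per input tensor.
attr_s, attr_d = nt.attribute(
    (x_s, x_d),
    nt_type="smoothgrad",
    nt_samples=25,
    stdevs=0.15,
    target=1,
    baselines=(torch.zeros_like(x_s), torch.zeros_like(x_d)),
)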