Visually interpretable deep network for diagnosis of breast masses on mammograms

Recently, deep learning has achieved notable success in medical image analysis, including computer-aided diagnosis (CADx). However, current deep-learning-based CADx approaches are limited in their ability to interpret diagnostic decisions, and this limited interpretability is a major obstacle to their practical use. In this paper, a novel visually interpretable deep network framework is proposed to provide diagnostic decisions together with a visual interpretation. The proposed method is motivated by the fact that radiologists characterize breast masses according to the breast imaging reporting and data system (BIRADS). The proposed framework consists of a BIRADS guided diagnosis network and a BIRADS critic network. A 2D map, named the BIRADS guide map, is generated during the inference process of the deep network. The visual features extracted from the breast masses are refined by the BIRADS guide map, which helps the deep network focus on more informative areas. The BIRADS critic network encourages the BIRADS guide map to be relevant to the characterization of masses in terms of the BIRADS description. To verify the proposed method, comparative experiments were conducted on a public mammogram database. On the independent test set (170 malignant masses and 170 benign masses), the proposed method achieved significantly higher performance than the deep network approach without the BIRADS guide map (p < 0.05). Moreover, visualizations were produced to show the locations from which the deep network exploited more information. This study demonstrates that the proposed visually interpretable CADx framework is a promising approach for visually interpreting the diagnostic decisions of a deep network.
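To make the described architecture concrete, the sketch below shows one plausible way a 2D guide map could refine convolutional features as spatial attention while a separate critic network judges the map against BIRADS descriptors. This is a minimal illustration assuming a PyTorch-style implementation; it is not the authors' code, and all module names, layer sizes, and the number of descriptor classes are illustrative assumptions.

```python
# Hypothetical sketch: guide-map-based feature refinement with a BIRADS critic.
# Not the paper's implementation; shapes and modules are assumptions.
import torch
import torch.nn as nn

class GuidedDiagnosisNet(nn.Module):
    """Backbone that produces a 2D guide map used to refine visual features."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(              # small CNN backbone (illustrative)
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.guide_head = nn.Conv2d(64, 1, 1)       # 1-channel 2D guide map
        self.classifier = nn.Sequential(            # benign vs. malignant head
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 2)
        )

    def forward(self, x):
        f = self.features(x)                        # visual feature maps
        guide = torch.sigmoid(self.guide_head(f))   # guide map values in (0, 1)
        refined = f * guide                         # spatially refined features
        return self.classifier(refined), guide

class BiradsCritic(nn.Module):
    """Critic that checks whether the guide map matches BIRADS descriptors."""
    def __init__(self, num_birads_descriptors=4):   # assumed descriptor count
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, num_birads_descriptors),
        )

    def forward(self, guide):
        return self.net(guide)                      # logits over BIRADS descriptors
```

Under these assumptions, training would combine a diagnosis loss on the benign/malignant output with a critic loss on the BIRADS descriptor predictions, so that the guide map is simultaneously useful for the diagnostic decision and interpretable in BIRADS terms.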
