Joint 2D-3D Breast Cancer Classification

Breast cancer is the malignant tumor responsible for the most cancer deaths in women. Digital mammography (DM, or 2D mammography) and digital breast tomosynthesis (DBT, or 3D mammography) are the two imaging modalities used in clinical practice for breast cancer detection and diagnosis. Radiologists usually read both modalities in combination; however, existing computer-aided diagnosis tools are designed around a single modality. Inspired by clinical practice, we propose a novel convolutional neural network (CNN) architecture for breast cancer classification that uses 2D and 3D mammograms simultaneously. Our experiments show that the proposed method significantly improves breast cancer classification performance. By ensembling three CNN classifiers, the proposed model achieves 0.97 AUC, which is 34.72% higher than methods that use only one imaging modality.
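The final step, ensembling three CNN classifiers into one AUC score, can be illustrated with a minimal sketch. This is an assumption about the fusion scheme (score-level averaging of the per-classifier malignancy probabilities), not the paper's exact method, and all scores below are made up for illustration:

```python
# Hypothetical score-level fusion of three classifiers (e.g. a 2D-mammogram
# CNN, a 3D-tomosynthesis CNN, and a joint 2D-3D CNN). Averaging the
# predicted probabilities is one common ensembling choice; the scores and
# labels here are illustrative only.

def auc(scores, labels):
    """AUC via the rank-sum (Mann-Whitney) formulation: the fraction of
    positive/negative pairs the classifier orders correctly."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def fuse(score_lists):
    """Average the malignancy probabilities of the individual classifiers."""
    return [sum(s) / len(s) for s in zip(*score_lists)]

# Illustrative per-case malignancy probabilities (not real data).
labels    = [1, 1, 1, 0, 0, 0]
cnn_2d    = [0.80, 0.55, 0.40, 0.45, 0.30, 0.20]  # misranks one pair
cnn_3d    = [0.70, 0.65, 0.60, 0.35, 0.50, 0.25]
cnn_joint = [0.90, 0.75, 0.55, 0.30, 0.40, 0.15]

fused = fuse([cnn_2d, cnn_3d, cnn_joint])
print(round(auc(cnn_2d, labels), 3))  # 2D classifier alone
print(round(auc(fused, labels), 3))   # fused ensemble
```

In this toy example the fused scores rank every malignant case above every benign one, so the ensemble AUC exceeds that of the 2D classifier alone, mirroring the paper's finding that combining modalities improves classification.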
