Deep learning in breast cancer risk assessment: evaluation of fine-tuned convolutional neural networks on a clinical dataset of FFDMs

We evaluated the potential of deep learning for breast cancer risk assessment using convolutional neural networks (CNNs) fine-tuned on full-field digital mammographic (FFDM) images. This study included 456 clinical FFDM cases from two high-risk datasets, BRCA1/2 gene-mutation carriers (53 cases) and unilateral cancer patients (75 cases), and a low-risk dataset serving as the control group (328 cases). All FFDM images (12-bit quantization, 100-micron pixel size) were acquired with a GE Senographe 2000D system and were retrospectively collected under an IRB-approved, HIPAA-compliant protocol. Regions of interest of 256x256 pixels were selected from the central breast region behind the nipple in the craniocaudal projection. A VGG19 network pre-trained on the ImageNet dataset was used to classify images as belonging to high-risk or low-risk subjects. The last fully-connected layer of the pre-trained VGG19 was fine-tuned on the FFDM images for breast cancer risk assessment. Performance was evaluated using the area under the receiver operating characteristic (ROC) curve (AUC) in the task of distinguishing between high-risk and low-risk subjects. AUC values of 0.84 (SE=0.05) and 0.72 (SE=0.06) were obtained in distinguishing between BRCA1/2 gene-mutation carriers and low-risk women, and between unilateral cancer patients and low-risk women, respectively. Deep learning with CNNs appears able to extract parenchymal characteristics relevant to distinguishing between cancer risk populations directly from FFDMs, and therefore has the potential to aid clinicians in assessing mammographic parenchymal patterns for cancer risk assessment.
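
To illustrate the fine-tuning strategy described above, the sketch below shows how the last fully-connected layer of an ImageNet-pretrained VGG19 can be replaced and trained for binary high-risk versus low-risk classification, with AUC computed on held-out cases. This is a minimal sketch assuming PyTorch/torchvision and scikit-learn, not the authors' implementation; the data loaders, preprocessing (e.g., replicating single-channel ROIs to three channels and resizing to 224x224), and hyperparameters are illustrative assumptions, and the empirical AUC computed here is only a stand-in for a full ROC analysis.

```python
# Hypothetical sketch: fine-tune only the last fully-connected layer of an
# ImageNet-pretrained VGG19 for high-risk vs. low-risk classification.
# Data loaders, hyperparameters, and preprocessing are assumptions.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.metrics import roc_auc_score


def build_model():
    model = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
    for param in model.parameters():           # freeze all pretrained weights
        param.requires_grad = False
    model.classifier[6] = nn.Linear(4096, 2)   # replace last FC layer (trainable)
    return model


def train_last_layer(model, train_loader, epochs=10, lr=1e-3, device="cpu"):
    # train_loader is assumed to yield (images, labels) with ROIs replicated
    # to 3 channels and resized to 224x224, labels 0 = low risk, 1 = high risk
    model.to(device).train()
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=lr)
    for _ in range(epochs):
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model


@torch.no_grad()
def evaluate_auc(model, test_loader, device="cpu"):
    # empirical AUC for the high-risk vs. low-risk task on held-out cases
    model.to(device).eval()
    scores, targets = [], []
    for images, labels in test_loader:
        probs = torch.softmax(model(images.to(device)), dim=1)[:, 1]
        scores.extend(probs.cpu().tolist())
        targets.extend(labels.tolist())
    return roc_auc_score(targets, scores)
```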
