Hazards of data leakage in machine learning: a study on classification of breast cancer using deep neural networks

With the renewed interest in developing machine learning methods for medical imaging using deep-learning approaches, it is essential to reexamine the hazards of data leakage. In this study, we simulated data leakage in the form of feature leakage, where a classifier was trained on the training set but the feature selection was influenced by performance on the validation set. A pre-trained deep convolutional neural network (DCNN) without fine-tuning was used as a feature extractor for the classification of malignant and benign masses in mammography. A feature selection algorithm was trained in the wrapper mode, with a cost function tuned to follow the performance metric on the validation set, and a linear discriminant analysis (LDA) classifier was trained to classify masses on mammographic patches. Mammograms from 1,882 patient cases yielded 4,577 unique patches, which were partitioned by patient into 3,222 patches for training and 508 for validation, while 847 were sequestered as an unseen independent test set to estimate the generalization error. The effect of finite sample size on data leakage was studied by varying the training and validation set sizes from 10% to 100% of the available sets. The area under the receiver operating characteristic curve (AUC) was used as the performance metric. The results show that performance on the validation set could be severely overestimated, with AUCs ranging from 0.75 to 0.99 across sample sizes, whereas performance on the independent test set realistically reached only an AUC of 0.72. This analysis indicates that deep-learning pipelines carry a high risk of performance inflation, and proper data-handling rules, in particular strict isolation of the test set from model and feature selection, should be followed when designing and developing deep-learning methods in medical imaging.
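
The leakage mechanism can be made concrete with a minimal sketch, not the authors' implementation: the snippet below uses pure-noise "features" in place of DCNN outputs and a simple greedy forward-selection wrapper (an assumption; the abstract does not name the selection algorithm), with the split sizes mirroring the 3,222/508/847 patch partition. Because the wrapper's cost function is the validation AUC, the validation estimate inflates well above chance while the sequestered test set reveals the true performance.

    # Minimal sketch of wrapper-mode feature leakage, assuming pre-extracted
    # features. Feature dimension, data, and the greedy forward-selection
    # procedure are illustrative assumptions, not the study's actual method.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n_features = 256  # hypothetical DCNN feature dimension
    X_train, y_train = rng.normal(size=(3222, n_features)), rng.integers(0, 2, 3222)
    X_val,   y_val   = rng.normal(size=(508,  n_features)), rng.integers(0, 2, 508)
    X_test,  y_test  = rng.normal(size=(847,  n_features)), rng.integers(0, 2, 847)

    def auc_of(features, X_fit, y_fit, X_eval, y_eval):
        """Train LDA on the training split, score AUC on an evaluation split."""
        lda = LinearDiscriminantAnalysis().fit(X_fit[:, features], y_fit)
        return roc_auc_score(y_eval, lda.decision_function(X_eval[:, features]))

    # Wrapper-mode selection: the classifier is fit on the training set, but
    # the cost function is the VALIDATION AUC -- this is the feature leakage.
    selected, remaining, best_val_auc = [], list(range(n_features)), 0.0
    while remaining:
        cand_auc, cand = max(
            (auc_of(selected + [f], X_train, y_train, X_val, y_val), f)
            for f in remaining)
        if cand_auc <= best_val_auc:
            break  # no remaining feature improves the validation AUC
        selected.append(cand)
        remaining.remove(cand)
        best_val_auc = cand_auc

    # With pure-noise features the leaked validation AUC climbs far above 0.5,
    # while the sequestered test set exposes the chance-level generalization.
    test_auc = auc_of(selected, X_train, y_train, X_test, y_test)
    print(f"validation AUC (leaked): {best_val_auc:.2f}")
    print(f"independent test AUC:    {test_auc:.2f}")

Because the validation labels steer which features survive, the validation set effectively becomes part of training; only the untouched test split gives an unbiased estimate of the generalization error, which is the pattern the study's 0.75-0.99 versus 0.72 AUC gap illustrates.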