A Combined Radio-Histological Approach for Classification of Low Grade Gliomas

Deep learning based techniques have been shown to be effective for automating medical image analysis tasks such as lesion segmentation and disease diagnosis. In this work, we demonstrate the utility of deep learning and radiomics features for classifying low grade gliomas (LGG) into astrocytoma and oligodendroglioma. The objective of this study is to predict the glioma class from whole-slide H&E-stained images and magnetic resonance (MR) images of the brain. We analyze the pathology and radiology datasets separately and then combine the predictions of the individual models to obtain the final class label for each patient.

Pre-processing of the whole-slide images involved region-of-interest detection, stain normalization, and patch extraction. An autoencoder was trained to extract features from each patch, and these features were then used to identify anomaly patches within the full set of patches for a single whole-slide image. The anomaly patches from all training slides formed the dataset for the classification model, a deep neural network that classifies individual patches into the two classes.

For the radiology-based analysis, each MRI scan was fed into a pre-processing pipeline comprising skull stripping, co-registration of MR sequences to T1c, re-sampling of MR volumes to isotropic voxels, and segmentation of the brain lesion. Lesions in the MR volumes were segmented automatically by a fully convolutional neural network (CNN) trained on the BraTS-2018 segmentation challenge dataset. From the segmentation maps, 64 × 64 × 64 cube patches centered on the tumor were extracted from the T1 MR images for computation of high-level radiomic features. These features were then used to train a logistic regression classifier.
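The cube-extraction and feature-computation step above can be sketched as follows. The paper does not publish code, so the function names, the padding strategy, and the particular first-order features shown here are illustrative assumptions; the actual work likely used a dedicated radiomics toolkit such as PyRadiomics.

```python
import numpy as np

def extract_tumor_cube(volume, mask, size=64):
    """Crop a size^3 cube centered on the tumor centroid.

    The volume is zero-padded so the crop stays valid even when the
    tumor sits near the image border (an assumed design choice).
    """
    center = np.array(np.nonzero(mask)).mean(axis=1).round().astype(int)
    half = size // 2
    vol_padded = np.pad(volume, [(half, half)] * 3, mode="constant")
    c = center + half  # shift centroid to account for the padding
    return vol_padded[c[0] - half:c[0] + half,
                      c[1] - half:c[1] + half,
                      c[2] - half:c[2] + half]

def first_order_features(cube):
    """A few simple first-order intensity statistics, standing in for
    the 'high level radiomic features' mentioned in the abstract."""
    vals = cube.ravel().astype(np.float64)
    return {
        "mean": float(vals.mean()),
        "std": float(vals.std()),
        "p90": float(np.percentile(vals, 90)),
        "energy": float((vals ** 2).sum()),
    }
```

A feature vector built this way per patient would then be fed to the logistic regression classifier described above.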
After developing the two models, we used a confidence-based prediction methodology to obtain the final class label for each patient. This combined approach achieved a classification accuracy of 90% on the challenge test set (n = 20). These results showcase the emerging role of deep learning and radiomics in analyzing whole-slide images and MR scans for lesion characterization.
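The abstract does not specify the confidence-based combination rule. One plausible reading, used here purely as an illustrative assumption, is to take the label from whichever model is more confident, i.e. whose predicted probability lies further from 0.5. With `p_path` and `p_rad` denoting each model's predicted probability of the oligodendroglioma class:

```python
def fuse_predictions(p_path: float, p_rad: float) -> int:
    """Return 1 (oligodendroglioma) or 0 (astrocytoma).

    Hypothetical fusion rule: trust the model whose prediction is
    further from 0.5 (the more confident one). The paper only states
    that a 'confidence based' methodology was used.
    """
    conf_path = abs(p_path - 0.5)
    conf_rad = abs(p_rad - 0.5)
    p = p_path if conf_path >= conf_rad else p_rad
    return int(p >= 0.5)
```

For example, `fuse_predictions(0.95, 0.55)` follows the pathology model because it is the more confident of the two.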
