Multi-modal data fusion schemes for integrated classification of imaging and non-imaging biomedical data

With a wide array of multi-modal, multi-protocol, and multi-scale biomedical data available for disease diagnosis and prognosis, there is a need for quantitative tools to combine such varied channels of information, especially imaging and non-imaging data (e.g., spectroscopy, proteomics). The major problem in such quantitative data integration lies in reconciling the large spread in the range of dimensionalities and scales across the different modalities. The primary goal of quantitative data integration is to build combined meta-classifiers; however, these efforts are thwarted by challenges in (1) homogeneous representation of the data channels, (2) fusing the attributes to construct an integrated feature vector, and (3) the choice of learning strategy for training the integrated classifier. In this paper, we seek to (a) describe 4 independent methods for quantitative data fusion that use the idea of a meta-space to build integrated multi-modal, multi-scale meta-classifiers, and (b) identify the key components that allow each method to succeed. These methods are (1) Generalized Embedding Concatenation (GEC), (2) Consensus Embedding (CE), (3) Semi-Supervised Multi-Kernel Graph Embedding (SeSMiK), and (4) Boosted Embedding Combination (BEC). To evaluate the optimal scheme for fusing imaging and non-imaging data, we compared these 4 schemes on the problems of combining (a) multi-parametric MRI with spectroscopy for prostate cancer (CaP) diagnosis in vivo, and (b) histological imaging with proteomic signatures (obtained via mass spectrometry) for predicting prognosis in CaP patients. The kernel combination approach (SeSMiK) marginally outperformed the embedding combination schemes. Additionally, intelligent weighting of the data channels (based on their relative importance) appeared to outperform unweighted strategies.
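To make the kernel-combination and channel-weighting ideas concrete, the following is a minimal sketch (not the authors' SeSMiK implementation) of weighted multi-kernel fusion: each modality contributes its own kernel matrix, and a weighted sum forms a single fused kernel on which any kernel-based learner or graph embedding could operate. The data, dimensionalities, and weights below are hypothetical illustrations.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gaussian (RBF) kernel matrix for the samples in the rows of X."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-gamma * np.maximum(d2, 0.0))

def combine_kernels(kernels, weights):
    """Weighted sum of per-modality kernels, K = sum_m beta_m * K_m.
    Weights are normalized to sum to 1; uniform weights recover the
    unweighted baseline, while unequal weights encode the relative
    importance of each data channel."""
    beta = np.asarray(weights, dtype=float)
    beta = beta / beta.sum()
    return sum(b * K for b, K in zip(beta, kernels))

# Hypothetical cohort: 10 patients with imaging features (5-D) and
# spectroscopic features (3-D) living on very different scales.
rng = np.random.default_rng(0)
X_img = rng.normal(size=(10, 5))
X_spec = 100.0 * rng.normal(size=(10, 3))

# Per-modality gammas absorb the scale differences before fusion;
# the imaging channel is (arbitrarily) weighted more heavily here.
K = combine_kernels(
    [rbf_kernel(X_img, gamma=0.1), rbf_kernel(X_spec, gamma=1e-4)],
    weights=[0.7, 0.3],
)
```

Because each modality's scale is handled inside its own kernel, the fused matrix sidesteps the dimensionality and scale mismatch that makes direct feature concatenation problematic.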
All 4 strategies easily outperformed a naïve decision fusion approach, suggesting that data integration methods will play an important role in the rapidly emerging field of integrated diagnostics and personalized healthcare.
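For contrast, a sketch of the kind of naïve decision-level fusion used as the baseline: each modality's classifier predicts independently and only the final labels are combined by majority vote, so no cross-modal structure is ever learned. The classifier outputs below are hypothetical.

```python
import numpy as np

def decision_fusion(per_modality_labels):
    """Naive decision-level fusion: each modality-specific classifier
    votes independently and the majority label wins (ties broken
    toward class 0). Contrast with meta-space schemes, which fuse the
    data channels *before* training a single integrated classifier."""
    votes = np.asarray(per_modality_labels)  # shape: (n_modalities, n_samples)
    return (votes.mean(axis=0) > 0.5).astype(int)

# Hypothetical binary predictions from two modality-specific classifiers.
imaging_pred = np.array([1, 0, 1, 1])
spectro_pred = np.array([1, 1, 0, 1])
fused = decision_fusion([imaging_pred, spectro_pred])  # → array([1, 0, 0, 1])
```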