A model of multimodal fusion for medical applications

Content-based image retrieval has been applied to many different biomedical applications [1]. In almost all cases, the query is a single image of a particular modality, and the retrieved images come from that same modality. For example, one system may retrieve color images from eye exams, while another retrieves fMRI images of the brain. Yet real patients often undergo tests in multiple modalities, and retrieval that draws on more than one modality could surface information that single-modality searches miss. In this paper, we demonstrate medical image retrieval for two different single modalities and propose a model for multimodal fusion that will lead to improved capabilities for physicians and biomedical researchers. We also describe a graphical user interface for multimodal retrieval that is being tested by biomedical researchers in several different fields.
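The fusion model itself is developed later in the paper. Purely as a rough illustration of the general idea, the sketch below shows one simple form such fusion could take: score-level ("late") fusion, in which each single-modality retriever scores candidate cases independently and a weighted sum merges the rankings. The function names, weights, and toy data are hypothetical assumptions for illustration, not the system described in this paper.

```python
# Hypothetical sketch of score-level ("late") fusion across modalities.
# Each single-modality retriever returns {case_id: similarity}; a weighted
# sum merges them so cases supported by several modalities rank higher.
# All names, weights, and scores here are illustrative, not the paper's model.

def fuse_scores(per_modality_scores, weights):
    """Combine per-modality similarity scores into one ranked list.

    per_modality_scores: dict mapping modality name -> {case_id: score in [0, 1]}
    weights: dict mapping modality name -> relative weight
    Cases missing from a modality simply contribute 0 for that modality.
    """
    fused = {}
    for modality, scores in per_modality_scores.items():
        w = weights.get(modality, 0.0)
        for case_id, score in scores.items():
            fused[case_id] = fused.get(case_id, 0.0) + w * score
    # Highest fused score first.
    return sorted(fused.items(), key=lambda item: item[1], reverse=True)


if __name__ == "__main__":
    # Toy example: similarity of database cases to a query patient who has
    # both an eye-exam photograph and a brain fMRI on record.
    scores = {
        "eye_photo": {"case_17": 0.91, "case_42": 0.55, "case_08": 0.30},
        "brain_fmri": {"case_42": 0.80, "case_17": 0.25},
    }
    weights = {"eye_photo": 0.5, "brain_fmri": 0.5}
    for case_id, score in fuse_scores(scores, weights):
        print(f"{case_id}: {score:.2f}")
```

In this toy run, a case that scores moderately well in both modalities outranks one that scores highly in only a single modality, which is the intuition behind combining evidence across a patient's different tests.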