Fusion of multi-modality volumetric medical imagery

Ongoing efforts at our laboratory have targeted the development of techniques for fusing medical imagery of various modalities (e.g. MRI, CT, PET, SPECT) into single image products. Past results have demonstrated the potential for user performance improvements and workload reduction. While these results are positive, a need exists to address the three-dimensional nature of most medical image data sets. In particular, image fusion of three-dimensional imagery (e.g. stacks of MRI slices) must account for information content not only within a given slice but also across adjacent slices. In this paper, we describe extensions to our 2D image fusion system that utilize 3D convolution kernels to determine locally relevant fusion parameters. Representative examples are presented for fusion of MRI and SPECT imagery. We also present these examples in the context of a GUI platform under development aimed at improving user-computer interaction for exploration and mining of medical data.
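
To make the idea of locally adaptive 3D fusion concrete, the following is a minimal illustrative sketch, not the system described in the paper. It assumes two co-registered volumes (stand-ins for MRI and SPECT), uses a 3x3x3 box kernel to estimate local activity (variance) in each modality, and blends voxel-wise using those activity maps as weights; the kernel, weighting rule, and all names (`local_activity`, `fuse_volumes`) are assumptions chosen for illustration.

```python
# Illustrative sketch of locally weighted 3D fusion of two co-registered
# volumes. The 3x3x3 box kernel and variance-based weighting are assumed
# for demonstration; they are not the paper's specific fusion parameters.
import numpy as np
from scipy.ndimage import uniform_filter

def local_activity(volume, size=3):
    """Local variance within a size**3 neighbourhood (3D box kernel)."""
    mean = uniform_filter(volume, size=size)
    mean_sq = uniform_filter(volume * volume, size=size)
    return np.maximum(mean_sq - mean * mean, 0.0)

def fuse_volumes(vol_a, vol_b, size=3, eps=1e-8):
    """Voxel-wise weighted average using local activity as fusion weights."""
    w_a = local_activity(vol_a.astype(np.float64), size)
    w_b = local_activity(vol_b.astype(np.float64), size)
    return (w_a * vol_a + w_b * vol_b) / (w_a + w_b + eps)

# Random stand-ins for co-registered MRI and SPECT slice stacks
mri = np.random.rand(64, 128, 128)     # (slices, rows, cols)
spect = np.random.rand(64, 128, 128)
fused = fuse_volumes(mri, spect)
print(fused.shape)
```

Because the activity estimate is computed with a 3D kernel, each fused voxel reflects information from adjacent slices as well as its own slice, which is the motivation for extending the 2D approach described above.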