Fusion of multi-modality volumetric medical imagery
Ongoing efforts at our laboratory have targeted the development of techniques for fusing medical imagery of various modalities (e.g., MRI, CT, PET, SPECT) into single image products. Past results have demonstrated the potential for user performance improvements and workload reduction. While these are positive results, a need exists to address the three-dimensional nature of most medical image data sets. In particular, fusion of three-dimensional imagery (e.g., MRI slice stacks) must account for information content not only within a given slice but also across adjacent slices. In this paper, we describe extensions to our 2D image fusion system that utilize 3D convolution kernels to determine locally relevant fusion parameters. Representative examples are presented for fusion of MRI and SPECT imagery. We also present these examples in the context of a GUI platform under development aimed at improving user-computer interaction for exploration and mining of medical data.
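The abstract does not specify how the 3D kernels are applied, so the following Python sketch is only an illustration of the general idea of deriving locally relevant fusion weights from a 3D neighbourhood that spans adjacent slices. The function names (local_energy, fuse_volumes), the box-kernel size, and the variance-based weighting rule are assumptions made for this example, not the authors' actual method.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_energy(volume, kernel=(3, 5, 5)):
    """Local activity (variance) computed with a 3D box kernel that spans
    neighbouring slices (first axis) as well as the in-plane neighbourhood."""
    mean = uniform_filter(volume, size=kernel)
    mean_sq = uniform_filter(volume ** 2, size=kernel)
    return np.clip(mean_sq - mean ** 2, 0.0, None)

def fuse_volumes(mri, spect, kernel=(3, 5, 5), eps=1e-6):
    """Hypothetical weighted fusion: each voxel is weighted by the local 3D
    activity of the corresponding modality, so the locally more informative
    modality dominates the fused result."""
    w_mri = local_energy(mri.astype(np.float64), kernel)
    w_spect = local_energy(spect.astype(np.float64), kernel)
    total = w_mri + w_spect + eps
    return (w_mri * mri + w_spect * spect) / total

# Usage example: two co-registered 64-slice volumes of 128x128 pixels.
mri = np.random.rand(64, 128, 128)
spect = np.random.rand(64, 128, 128)
fused = fuse_volumes(mri, spect)
print(fused.shape)  # (64, 128, 128)
```

The key point the sketch captures is that the weighting kernel extends across the slice axis, so fusion parameters at a given voxel reflect information in adjacent slices rather than a single 2D slice alone.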