Information fusion in image understanding: LANDSAT classification and ocular fundus images
This work is motivated by the observation that Computer Vision and Image Understanding processes are not very robust: small changes in exposure parameters or in the internal parameters of the algorithms used can lead to significantly different results. Combining (fusing) these results is profitable in many respects. We introduce an extended fusion concept that deals with different sources of information at external levels (world, scene, image) and internal levels (image description, scene description, world description), and we define the process of fusion. Related work in the field is reviewed and connected with our model. Each of our levels requires its own quality measures and information fusion algorithms in order to combine components from several sources, so we begin by investigating fusion at isolated levels. Two application examples from our own work are discussed: remote sensing (improvement of classification results by fusion at the image level) and medical image processing of ocular fundus images (automatic control point selection by fusion at the image description level). Our experimental results at isolated levels encourage the incorporation of the complete fusion model into a complex image understanding system.
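The abstract does not specify the concrete fusion rule used at the image level for the LANDSAT application. As a minimal sketch, assuming the classification results are per-pixel label maps from several classifiers (or parameter settings) and that fusion is a simple per-pixel majority vote, the combination could look as follows; the function name, the NumPy representation, and the tie-breaking rule are illustrative assumptions, not the authors' method.

```python
import numpy as np

def fuse_classifications(label_maps):
    """Fuse per-pixel classification maps by majority vote (illustrative).

    label_maps -- list of 2-D integer arrays of identical shape, each the
    class-label map produced by one classifier (or one parameter setting)
    for the same scene.  Ties are resolved toward the smallest class label.
    """
    stack = np.stack(label_maps, axis=0)               # (n_sources, H, W)
    n_classes = int(stack.max()) + 1
    # One vote plane per class: votes[c, y, x] = number of sources voting c.
    votes = np.array([(stack == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)                        # fused label map, (H, W)

# Example: three disagreeing 2x2 label maps fused into one consensus map.
a = np.array([[0, 1], [2, 2]])
b = np.array([[0, 1], [1, 2]])
c = np.array([[0, 0], [2, 2]])
print(fuse_classifications([a, b, c]))   # -> [[0 1]
                                         #     [2 2]]
```

A majority vote is only one possible quality-weighted combination; the paper's framework would attach level-specific quality measures to each source, which could replace the uniform vote with weighted evidence.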