In human perception of the outside world, many organs play important roles: they capture information from the environment and send it to the brain for interpretation and understanding. This is reflected in the construction of modern intelligent robots, which are equipped with many sensors for vision, hearing, taste, smell, touch, pain, heat, force, slide, proximity, etc. (Luo & Jiang, 2002). All these sensors provide different profiles of the same real-world environment. To coordinate the various sensors and combine the information they acquire, the theories and methods of multi-sensor information fusion are required.

Multi-sensor information fusion is a basic ability of human beings, and it is also a necessity for contemporary machines. In many cases, the information provided by a single sensor is incomplete, inaccurate, vague, or otherwise uncertain; sometimes the information obtained by different sensors is even contradictory. Human beings have the ability to suitably combine the information obtained by different organs and then to estimate and make decisions about the environment and events. Using a computer to perform multi-sensor information fusion can therefore be considered a simulation of how the human brain treats complex problems. Multi-sensor information fusion operates on data acquired from various sensors, using available techniques to process the contained information and to obtain results that are more comprehensive, accurate, and robust than those obtainable from a single sensor. Fusion can be defined as the process of jointly processing data acquired from multiple sensors, and of coordinating, optimizing, and unifying these data, so as to increase the ability to extract information and to improve decision-making capability.
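The gain from combining uncertain single-sensor measurements can be made concrete with a minimal sketch (a hypothetical illustration, not a method from this article): inverse-variance weighting of two independent readings of the same quantity, where the fused estimate always has lower variance than either reading alone.

```python
# Assumed example: fuse two independent noisy scalar readings of the same
# quantity by inverse-variance weighting. The more reliable (lower-variance)
# sensor receives the larger weight.

def fuse_readings(x1, var1, x2, var2):
    """Combine two independent measurements; return fused value and variance."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused = (w1 * x1 + w2 * x2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)  # always smaller than min(var1, var2)
    return fused, fused_var

# Example: a coarse sensor (x=10.2, var=4.0) and a finer one (x=9.8, var=1.0)
estimate, variance = fuse_readings(10.2, 4.0, 9.8, 1.0)
# estimate is pulled toward the more reliable reading; variance drops to 0.8
```

The same principle, applied to redundant or complementary sensors, underlies the reliability gains described above.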
Fusion can extend information coverage in space and time, reduce ambiguity, and increase both the reliability of decision making and the robustness of systems. Multi-sensor image fusion is a particular type of multi-sensor information fusion that takes images as its operating objects. Since an image is worth a thousand words, and around 75% of the information human beings obtain from the outside world comes via vision, multi-sensor image fusion has recently attracted much attention in the information society. This article introduces the principle and main steps of image fusion, discusses some typical fusion methods and their combinations, and points out several potential directions for future development.
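As a minimal sketch of taking images as the operating objects (an assumed illustration; practical systems use multiresolution schemes such as wavelet-based fusion rather than a plain average), two registered grayscale images can be fused pixel by pixel with a weighted average:

```python
import numpy as np

# Assumed illustration: pixel-level fusion of two registered grayscale
# images by weighted averaging. alpha controls the contribution of img_a.
def fuse_images(img_a, img_b, alpha=0.5):
    """Weighted per-pixel average of two same-size uint8 grayscale images."""
    if img_a.shape != img_b.shape:
        raise ValueError("images must be registered to the same size")
    fused = alpha * img_a.astype(np.float64) + (1.0 - alpha) * img_b.astype(np.float64)
    return np.clip(fused, 0, 255).astype(np.uint8)

# Toy example: fuse a dark and a bright 2x2 image
a = np.array([[0, 50], [100, 150]], dtype=np.uint8)
b = np.array([[200, 150], [100, 50]], dtype=np.uint8)
print(fuse_images(a, b))  # with alpha=0.5, each pixel is the mean of the inputs
```

Pixel-level averaging is the simplest point on a spectrum that runs from pixel-level through feature-level to decision-level fusion; the more sophisticated methods discussed later operate on transform coefficients or regions rather than raw pixels.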