During the late 1980s, several Japanese industrial and academic groups began pioneering work on utilizing omni-directional images, mainly in conjunction with intelligent mobile robotics research. At Fujitsu, the Morita and Uchiyama group obtained a sequence of omni-directional images using a fish-eye camera mounted on a mobile platform. They then analyzed those images on a spherical surface and reconstructed three-dimensional information using the Hough transform on the sphere (Morita et al., 1989). Yagi and Kawato's group at Mitsubishi Electric proposed an imaging system for acquiring omni-directional images; their system placed a circular cone mirror along the optical axis of a camera to enable real-time navigation of a mobile robot (Yagi and Kawato, 1990). Tsuji's group at Osaka University constructed panoramic images from a camera mounted on a mobile platform (Zheng and Tsuji, 1990a, b; Ishiguro et al., 1990); their main applications were robot navigation and localization based on the obtained panoramic images.

In the fifteen years since then, research on omni-directional images has blossomed in Japan. It ranges from new sensor design, through basic computer vision research on the photometric and geometric characteristics of omni-directional images, to novel display devices for omni-directional images, and on to new applications of omni-directional images in mobile robot navigation, virtual reality, and surveillance. This special issue presents leading-edge research activities in Japan and contains representative papers from each of the following four areas: sensing device design, basic omni-vision theory, display techniques, and applications.

Sensor design using a circular-cone or hyperbolic mirror is very popular.
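The panoramic images mentioned above are typically obtained by "unwrapping" the circular image produced by a cone or hyperbolic mirror into a cylindrical strip. As an illustrative sketch only (the helper below is hypothetical and not taken from any of the cited papers), the unwrapping amounts to a polar-to-Cartesian remap around the mirror's image center, here with nearest-neighbour sampling to keep it dependency-free:

```python
import numpy as np

def unwrap_omni(omni, center, r_min, r_max, out_w=360, out_h=None):
    """Unwrap a circular omni-directional image (e.g. from a cone or
    hyperbolic mirror) into a cylindrical panorama via a polar remap.

    Each output column corresponds to an azimuth angle around the mirror
    axis; each output row corresponds to a radius between r_min and r_max
    in the source image. Nearest-neighbour sampling for simplicity.
    """
    if out_h is None:
        out_h = r_max - r_min
    cx, cy = center
    theta = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    r = np.linspace(r_min, r_max, out_h)
    rr, tt = np.meshgrid(r, theta, indexing="ij")   # (out_h, out_w) grids
    # Polar -> Cartesian source coordinates, clipped to the image bounds.
    xs = np.clip(np.rint(cx + rr * np.cos(tt)).astype(int), 0, omni.shape[1] - 1)
    ys = np.clip(np.rint(cy + rr * np.sin(tt)).astype(int), 0, omni.shape[0] - 1)
    return omni[ys, xs]
```

A real sensor additionally requires calibrating the mirror center and the radial-to-elevation mapping implied by the mirror profile; the linear radius sampling above is only a first approximation.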
Yagi and Yachida of Osaka University, one of the pioneering groups in utilizing those mirrors for omni-directional sensors, survey their long-term efforts in designing several such sensors for environment recognition by a mobile robot in real-time navigation.

Omni-directional sensors have wide fields of view. When many omni-directional sensors are available in the same environment, effective localization constraints among them can be defined, since each sensor can observe the other sensors in its field of view. Ishiguro, Sogo, and Barth explore such constraints for real-time localization of omni-directional cameras.

Omni-directional images, with their wide fields of view, also have promising applications in the virtual reality area. Traditional screens and head-mounted displays, due to their narrow fields of view, cannot provide users with a sufficient sense of immersion when displaying panoramic images. Iwata of Tsukuba University proposes a spherical screen, referred to as Ensphered Vision, to display 360-degree panoramic images.

Another promising application of omni-directional images is constructing a virtual model from a sequence of such images. Omni-directional images, which contain all possible lines of sight from their viewpoints, provide effective image acquisition and require less sampling than ordinary images. This characteristic is particularly useful when virtualizing a very large area such as the entire city of Tokyo. Ikeuchi's group at the University of Tokyo has been working on modeling urban scenes from sequences of omni-directional images. Their paper overviews several such techniques, e.g., extracting 3D information from a sequence of omni-directional images, and generating arbitrary views with an image-based rendering method that exploits the fact that omni-directional images capture all possible lines of sight.
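The mutual-observation constraint exploited by such multi-sensor localization can be made concrete in a minimal two-camera, planar sketch (a hypothetical illustration, not the method of the paper): if camera A measures the bearing of camera B in its own frame, and B measures the bearing of A in its frame, the relative position (up to scale) and relative heading of B follow directly, since the line of sight B-to-A is the same physical direction expressed in both frames.

```python
import math

def mutual_pose(alpha, beta, baseline=1.0):
    """Relative pose of omni camera B in omni camera A's frame from
    mutual bearing observations.

    alpha    -- bearing of B as measured by A (radians, A's frame)
    beta     -- bearing of A as measured by B (radians, B's frame)
    baseline -- distance between the cameras; bearings alone fix the
                pose only up to this scale factor

    Returns (bx, by, heading): B's position and heading in A's frame.
    """
    bx = baseline * math.cos(alpha)
    by = baseline * math.sin(alpha)
    # The direction B -> A is (alpha + pi) in A's frame and beta in B's
    # frame, so B's heading offset is their difference.
    heading = (alpha + math.pi - beta) % (2.0 * math.pi)
    return bx, by, heading
```

With more than two mutually visible sensors, the pairwise bearing constraints over-determine the network geometry, which is what makes the joint localization effective.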
[1] Yasushi Yagi, et al., "Panorama scene analysis with conic projection," IEEE International Workshop on Intelligent Robots and Systems, Towards a New Frontier of Applications, 1990.
[2] Saburo Tsuji, et al., "Panoramic representation of scenes for route understanding," Proceedings of the 10th International Conference on Pattern Recognition, 1990.
[3] Masashi Yamamoto, et al., "Analysis of omni-directional views at different location," IEEE International Workshop on Intelligent Robots and Systems, Towards a New Frontier of Applications, 1990.
[4] Saburo Tsuji, et al., "From anorthoscope perception to dynamic vision," Proceedings of the IEEE International Conference on Robotics and Automation, 1990.
[5] Takashi Uchiyama, et al., "Measurement in three dimensions by motion stereo and spherical mapping," Proceedings of CVPR '89: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1989.