The problem under consideration is the construction of a three-dimensional description of the unprepared environment of an autonomous mobile robot. In an image sequence, tracking is performed after image rectification, an intermediate step that minimizes the relative displacement of tokens between two frames; reducing the disparity between corresponding tokens simplifies the matching, and hence the tracking, phase. In this paper we present a new hybrid approach to range estimation that combines inertial and vision-based technologies, allowing us to compute the image-space distance between the robotic head and the edge lines of the 3D environment. Two frames of the image sequence, obtained from a passive target-tracking system (a moving CCD video camera), are combined with the output of the inertial tracking system, which reports the relative changes in orientation and acceleration between the two frames. By integrating these data in our algorithm, the image-space distances of different 3D points were estimated both theoretically and experimentally.
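A minimal sketch of the kind of computation this hybrid approach enables: when the inertial tracker supplies the relative rotation and translation between the two frames, the range of a tracked token can be recovered by two-view (midpoint) triangulation. The function names, interfaces, and the midpoint method itself are illustrative assumptions, not the authors' actual algorithm.

```python
# Hedged sketch: triangulating a tracked token from two frames, given the
# inter-frame rotation R and translation t reported by the inertial unit.
# All names and the midpoint method are illustrative, not the paper's method.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def matvec(M, v):
    return [dot(row, v) for row in M]

def triangulate(p1, p2, R, t):
    """Midpoint triangulation of one matched token.

    p1, p2 : normalized image coordinates (x, y) of the same 3D point
             in frame 1 and frame 2.
    R, t   : rotation (3x3) and translation (3-vector) of camera 2
             relative to camera 1, as reported by the inertial tracker.
    Returns the estimated 3D point in camera-1 coordinates.
    """
    d1 = [p1[0], p1[1], 1.0]             # viewing ray in frame 1
    d2 = matvec(R, [p2[0], p2[1], 1.0])  # frame-2 ray rotated into frame 1
    w = [-ti for ti in t]                # o1 - o2, with o1 at the origin
    # Normal equations for the closest points on the two rays:
    #   s*(d1.d1) - u*(d1.d2) = -d1.w
    #   s*(d1.d2) - u*(d2.d2) = -d2.w
    a11, a12, a22 = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    b1, b2 = -dot(d1, w), -dot(d2, w)
    det = a11 * a22 - a12 * a12
    s = (a22 * b1 - a12 * b2) / det
    u = (a12 * b1 - a11 * b2) / det
    # Midpoint between the two closest ray points.
    return [(s * d1[i] + t[i] + u * d2[i]) / 2.0 for i in range(3)]

# Synthetic check: a point at (1, 0, 5) seen from two laterally displaced
# positions with no rotation between frames.
R_identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
P = triangulate((0.2, 0.0), (0.0, 0.0), R_identity, [1.0, 0.0, 0.0])
```

In practice the normalized coordinates would come from the rectified, tracked tokens, and R and t from integrating the gyro and accelerometer outputs between the two frames; the recovered depth component of `P` is the range to the token.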