A combination of a monocular CCD camera and an inertial sensor for range estimation

The problem under consideration centers on building a three-dimensional description of the unprepared environment of an autonomous mobile robot. In an image sequence, tracking is performed after image rectification. This intermediate step minimizes the relative displacement of tokens between two frames, which reduces the disparity between corresponding tokens and thus simplifies the matching process. In this paper we present a new hybrid approach to range estimation that combines inertial and vision-based technologies, allowing us to compute the image-space distance between the robotic head and the edge lines of the 3D environment. Two frames from the image sequence, obtained from a passive target-tracking system (a moving CCD video camera), are combined with the output of the inertial tracking system, which reports the relative changes in orientation and acceleration between the two frames. By integrating these data into our algorithm, the image-space distances of different 3D points were estimated both theoretically and experimentally.
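To illustrate the fusion idea described above, the following is a minimal sketch (not the paper's actual algorithm) of how a 3D point can be recovered from two frames once the inertial sensor supplies the relative pose between them. It uses standard linear (DLT) two-view triangulation; the function name `triangulate` and all numeric values are illustrative assumptions, and the intrinsics matrix `K` would in practice come from camera calibration.

```python
import numpy as np

def triangulate(p1, p2, K, R, t):
    """Linear (DLT) triangulation of one point seen in two frames.

    p1, p2 : pixel coordinates (u, v) of the same token in frames 1 and 2
    K      : 3x3 camera intrinsics matrix
    R, t   : rotation and translation of frame 2 relative to frame 1
             (assumed here to be reported by the inertial tracking system)
    Returns the 3D point expressed in frame-1 camera coordinates.
    """
    # Projection matrices for the two camera poses.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t.reshape(3, 1)])
    # Each image observation contributes two linear constraints on X.
    A = np.vstack([
        p1[0] * P1[2] - P1[0],
        p1[1] * P1[2] - P1[1],
        p2[0] * P2[2] - P2[0],
        p2[1] * P2[2] - P2[1],
    ])
    # Homogeneous least-squares solution via SVD.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

Given the triangulated point, its Euclidean distance to the camera (the range) is simply the norm of the returned vector. The quality of this estimate depends directly on the accuracy of the inertially derived pose `(R, t)`.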