Towards globally consistent pose estimation from images

We propose a method for computing globally consistent pose estimates using vision. Lu and Milios (1997) described an approach in which the links between poses were estimated from range scanner data. With point correspondences, however, the length of the links cannot be estimated, so the approach of Lu and Milios has to be modified. First, we use the nonlinear orientation part of the pose differences to obtain a reference trajectory. This reference trajectory is then used to scale and orient the linear spatial part of the pose differences, so that the positions can be estimated as well. We show results of an experiment in which a robot equipped with an omnidirectional camera navigated a corridor.
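The two-step idea (orientations first, then positions from oriented link directions) can be illustrated with a minimal 2D pose-graph sketch. This is our own toy illustration, not the paper's implementation: we assume the link lengths are already known (in the actual method they come from the reference trajectory), and all variable names are hypothetical. A square loop of four poses with relative-heading measurements and unit translation directions is solved by two linear least-squares problems:

```python
import numpy as np

# Pose graph: a chain 0-1-2-3 plus a loop closure 3-0.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
# Relative-heading measurements per link (a consistent square loop).
dtheta = np.array([np.pi / 2, np.pi / 2, np.pi / 2, -3 * np.pi / 2])

# Step 1: globally consistent headings via least squares,
# anchoring theta_0 = 0 to fix the gauge freedom.
A = np.zeros((len(edges) + 1, 4))
b = np.zeros(len(edges) + 1)
for k, (i, j) in enumerate(edges):
    A[k, j], A[k, i], b[k] = 1.0, -1.0, dtheta[k]
A[-1, 0] = 1.0  # theta_0 = 0
theta = np.linalg.lstsq(A, b, rcond=None)[0]

# Step 2: point correspondences give only translation *directions*
# (unit vectors in each local frame); link lengths are assumed known here.
directions = np.array([[1.0, 0.0]] * 4)  # straight ahead along each link
lengths = np.ones(4)                     # assumed per-link scale
P = np.zeros((2 * len(edges) + 2, 8))
q = np.zeros(2 * len(edges) + 2)
for k, (i, j) in enumerate(edges):
    c, s = np.cos(theta[i]), np.sin(theta[i])
    # Rotate the local direction into the global frame and scale it.
    t = lengths[k] * np.array([[c, -s], [s, c]]) @ directions[k]
    P[2 * k, 2 * j], P[2 * k, 2 * i], q[2 * k] = 1.0, -1.0, t[0]
    P[2 * k + 1, 2 * j + 1], P[2 * k + 1, 2 * i + 1], q[2 * k + 1] = 1.0, -1.0, t[1]
P[-2, 0] = P[-1, 1] = 1.0  # anchor position of pose 0 at the origin
xy = np.linalg.lstsq(P, q, rcond=None)[0].reshape(4, 2)
print(np.round(xy, 3))  # the four corners of a unit square
```

Because the measurements here are noise-free and consistent, both least-squares problems are solved exactly; with noisy links, the same machinery distributes the loop-closure error over the trajectory.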