Localization by Co-observing Robots

This paper proposes three methods to improve the accuracy of a robot's self-state estimation through mutual observation of other robots. Many methods have been proposed to enable autonomous robots to estimate their states, such as SLAM (simultaneous localization and mapping), but they usually require high-precision sensors (e.g., a laser range finder) or high computing power. Robots equipped with low-precision sensors and actuators can compensate for this limited performance by mutually observing other robots. In this paper we propose combining three methods that exploit such mutual observation for accurate state estimation. Each of the three methods has advantages and drawbacks, so we propose switching among them depending on the situation.
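To make the idea of mutual observation concrete, here is a minimal sketch (not the paper's algorithm; the function name, frame conventions, and noise values are hypothetical) in which a robot's drifted dead-reckoning estimate is corrected by the position implied by a teammate's range-and-bearing observation of it, fused by a simple inverse-variance weighted average.

```python
import numpy as np

def fuse_position(own_est, own_var, observer_pos, observer_var,
                  rel_range, rel_bearing, obs_var):
    """Fuse a dead-reckoned position with the position implied by another
    robot's observation of us (range and world-frame bearing).

    own_est, observer_pos : (x, y) arrays
    own_var, observer_var, obs_var : scalar (isotropic) variances, for simplicity
    """
    # Position of this robot as implied by the teammate's observation.
    implied = observer_pos + rel_range * np.array(
        [np.cos(rel_bearing), np.sin(rel_bearing)])
    implied_var = observer_var + obs_var

    # Inverse-variance weighted average of the two independent estimates.
    w_own = 1.0 / own_var
    w_obs = 1.0 / implied_var
    fused = (w_own * own_est + w_obs * implied) / (w_own + w_obs)
    fused_var = 1.0 / (w_own + w_obs)
    return fused, fused_var


# Hypothetical example: a robot with drifted odometry is corrected by a
# teammate that has a much more accurate position estimate.
own_est = np.array([2.3, 1.1])  # drifted dead-reckoning estimate
fused, var = fuse_position(own_est, own_var=0.5,
                           observer_pos=np.array([0.0, 0.0]),
                           observer_var=0.05,
                           rel_range=2.5, rel_bearing=np.deg2rad(25.0),
                           obs_var=0.1)
print(fused, var)
```

In practice such corrections are typically embedded in a recursive filter (e.g., a Kalman-type filter) rather than a one-shot average; the sketch only illustrates how one robot's observation can reduce another robot's position uncertainty.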
