Direct Depth and Color-based Environment Modeling and Mobile Robot Navigation

This paper describes a new method for indoor environment mapping and localization with a stereo camera. For environment modeling, we directly use the depth and color information of image pixels as visual features. Moreover, only the depth and color along the horizontal centerline of the image, through which the optical axis passes, are used. The advantage of this choice is that a similarity measure between the model and the sensing data can be computed on the horizontal centerline alone; this matters because the vertical working volume between the model and the sensing data changes as the robot moves. As a result, the indoor environment can be mapped in a compact and efficient representation. Based on these map nodes and the sensing data, we also propose a method for estimating the mobile robot's pose with a random-sampling stochastic algorithm. Basic real-world experiments show that the proposed method can serve as an effective visual navigation algorithm.
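The two core ideas above, extracting depth/color features only along the horizontal centerline and weighting pose hypotheses by a measure between model and sensing data, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the feature layout, the Gaussian form of the measure, and the `sigma` parameter are assumptions.

```python
import numpy as np

def centerline_features(depth, color):
    """Extract the depth/color feature vector along the horizontal
    centerline of the image (the row the optical axis passes through).

    depth: (H, W) array of stereo depth values.
    color: (H, W, 3) RGB image.
    The concatenated layout is an assumption for illustration.
    """
    row = depth.shape[0] // 2
    return np.concatenate([depth[row], color[row].reshape(-1)])

def hypothesis_weight(model_feat, sensed_feat, sigma=1.0):
    """Weight for one pose hypothesis in a random-sampling scheme,
    comparing a map node's centerline features with the currently
    sensed ones. A Gaussian of the feature distance is a common
    choice; the paper's exact measure is not specified here.
    """
    d = np.linalg.norm(model_feat - sensed_feat)
    return np.exp(-0.5 * (d / sigma) ** 2)
```

In a random-sampling (particle-filter-style) localization loop, each sampled pose would be scored with `hypothesis_weight` against the map node predicted for that pose, and the sample set resampled in proportion to these weights.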
