Representing environment through target-guided navigation

This paper proposes an environmental representation, called a T-Net, a network of path segments that reflects the structure of the environment. The robot iteratively performs panoramic sensing to find and estimate the skeletons of local areas, as well as their spatial relationships, through active navigation guided by selected targets. The environment is represented as a network of these skeletons, which serve as path segments for planning and navigating to a goal in real time. Because dynamic changes often occur in real environments, the robot also detects such changes while navigating to a destination and, when they are structural, updates the T-Net with alternative path segments. Results of experiments using a robot with a real-time omnidirectional vision sensor are given.

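The T-Net described above is essentially a graph whose edges are skeleton path segments between local areas. The sketch below illustrates, under stated assumptions, how such a network could be stored, searched with Dijkstra's algorithm, and updated when a structural change invalidates a segment; the class, method, and identifier names are hypothetical and are not taken from the paper.

```python
# Minimal sketch of a T-Net-style path-segment network.
# Assumption: junctions and segments are identified by simple ids;
# planning is ordinary shortest-path search over segment lengths.
import heapq


class TNet:
    def __init__(self):
        # adjacency: junction -> list of (neighbor junction, segment id, length)
        self.adj = {}

    def add_segment(self, seg_id, a, b, length):
        """Register a skeleton path segment connecting junctions a and b."""
        self.adj.setdefault(a, []).append((b, seg_id, length))
        self.adj.setdefault(b, []).append((a, seg_id, length))

    def remove_segment(self, seg_id):
        """Drop a segment, e.g. after a structural change is detected."""
        for node in self.adj:
            self.adj[node] = [e for e in self.adj[node] if e[1] != seg_id]

    def plan(self, start, goal):
        """Dijkstra over path segments; returns a list of segment ids or None."""
        dist = {start: 0.0}
        prev = {}
        pq = [(0.0, start)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == goal:
                break
            if d > dist.get(u, float("inf")):
                continue
            for v, seg_id, length in self.adj.get(u, []):
                nd = d + length
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    prev[v] = (u, seg_id)
                    heapq.heappush(pq, (nd, v))
        if goal != start and goal not in prev:
            return None  # no route; the net would need new segments
        path, node = [], goal
        while node != start:
            node, seg_id = prev[node]
            path.append(seg_id)
        return list(reversed(path))
```

In this sketch, a detected structural change along the current route would be handled by calling remove_segment for the blocked segment (and add_segment for any newly found alternative) and then re-planning over the remaining network, mirroring the update-and-replan behavior the abstract describes.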