A new approach to vision-based unsupervised learning of unexplored indoor environment for autonomous land vehicle navigation

Abstract A vision-based approach to unsupervised learning of indoor environments for autonomous land vehicle (ALV) navigation is proposed. Without human involvement, the ALV navigates systematically through an unexplored closed environment, collects information about environment features, and builds a top-view map of the environment for later planned navigation or other applications. The learning system consists of three subsystems: a feature location subsystem, a model management subsystem, and an environment exploration subsystem. The feature location subsystem processes input images and computes the locations of local features and of the ALV using model matching techniques. To facilitate feature collection, two laser markers mounted on the vehicle project laser light onto the corridor walls, forming easily detectable line and corner features. The model management subsystem merges each local model into a global one by matching corner pairs as well as line segment pairs. The environment exploration subsystem guides the ALV to explore the entire navigation environment using the learned model and the current ALV location; the guidance scheme is based on a pushdown transducer derived from automata theory. A prototype learning system was implemented on a real vehicle, and simulation and experimental results in real environments show the feasibility of the proposed approach.
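The abstract names a pushdown-transducer guidance scheme without detailing it. The following is a minimal sketch, not the authors' implementation, of how such a stack-based controller can drive systematic exploration: branches not taken at a junction are pushed onto the stack, and backtrack commands are emitted to pop them once the current corridor is exhausted. The observation and command symbols (`corridor`, `junction:...`, `dead_end`, `turn_*`, `backtrack_*`) are hypothetical placeholders for the features the real system detects.

```python
# Minimal sketch of a pushdown-transducer-style exploration controller.
# Observation and command names are hypothetical; the real system would map
# detected corner/line features to abstract input symbols like these.

def explore(read_observation, emit_command):
    """Drive the vehicle until every recorded branch has been visited."""
    stack = []  # unexplored branches (the pushdown store)

    while True:
        obs = read_observation()  # e.g. "corridor", "junction:left,right", "dead_end"

        if obs.startswith("junction"):
            branches = obs.split(":", 1)[1].split(",")
            for branch in branches[1:]:
                stack.append(branch)          # remember branches not taken now
            emit_command(f"turn_{branches[0]}")  # take the first branch immediately
        elif obs == "corridor":
            emit_command("forward")
        elif obs == "dead_end":
            if not stack:                     # stack empty: exploration complete
                emit_command("stop")
                return
            emit_command(f"backtrack_{stack.pop()}")  # pop and revisit a branch


if __name__ == "__main__":
    # Scripted run on a toy corridor with one junction and two dead ends.
    observations = iter(["corridor", "junction:left,right", "corridor",
                         "dead_end", "corridor", "dead_end"])
    explore(lambda: next(observations), print)
```

A full controller would presumably also record the vehicle pose at each pushed junction so that backtracking can be planned on the partially built map; that bookkeeping is omitted from this sketch.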
