Fast road classification and orientation estimation using omni-view images and neural networks

This paper presents the results of integrating omnidirectional-view image analysis with a set of adaptive backpropagation networks to enable a mobile robot to understand outdoor road scenes. Both the road orientations used for robot heading and the road categories used for robot localization are determined by the integrated system, the road understanding neural networks (RUNN). Classification is performed before orientation estimation so that the system can handle road images of different types effectively and efficiently. An omni-view image (OVI) sensor captures images with a 360-degree view around the robot in real time. Rotation-invariant image features are extracted by a series of image transformations and serve as the inputs to a road classification network (RCN). Each road category has its own road orientation network (RON), and the classification result (the road category) activates the corresponding RON to estimate the road orientation of the input image. Several design issues are studied, including the network model, the selection of input data, the number of hidden units, and learning problems. The internal representations of the networks are carefully analyzed. Experimental results with real scene images show that the method is fast and robust.
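The two-stage pipeline described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the ring-sampling step, the use of Fourier magnitudes as rotation-invariant features, and the `rcn`/`rons` callables are all illustrative assumptions. One well-known property it relies on is that rotating an omnidirectional image circularly shifts a ring of pixels around the image center, and the magnitude of the discrete Fourier transform is invariant to such shifts.

```python
import numpy as np

def ring_signature(omni_image, radius, n_samples=64):
    """Sample a circular ring of pixels around the image center.

    A rotation of the omni-view image corresponds to a circular
    shift of this 1-D signal.
    """
    h, w = omni_image.shape
    cy, cx = h // 2, w // 2
    angles = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    ys = (cy + radius * np.sin(angles)).astype(int)
    xs = (cx + radius * np.cos(angles)).astype(int)
    return omni_image[ys, xs]

def rotation_invariant_features(signal, n_coeffs=8):
    """FFT magnitudes are invariant to circular shifts of the input,
    so they do not change when the robot (and hence the image) rotates."""
    spectrum = np.abs(np.fft.rfft(signal))
    return spectrum[:n_coeffs]

def understand_road(omni_image, rcn, rons, radius=40):
    """Classify first, then dispatch to the matching orientation network.

    `rcn` maps invariant features to a road category; `rons` is a dict of
    per-category orientation estimators that see the raw (orientation-
    bearing) ring signal. Both are hypothetical stand-ins for the trained
    backpropagation networks in the paper.
    """
    sig = ring_signature(omni_image, radius)
    feats = rotation_invariant_features(sig)
    category = rcn(feats)              # RCN output: road category
    orientation = rons[category](sig)  # category-specific RON
    return category, orientation
```

The classify-then-estimate split mirrors the paper's design: the invariant features let one RCN cover all headings, while each RON can specialize on a single road type.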
