Motion stereo for navigation of autonomous vehicles in man-made environments

Abstract To navigate, a vehicle must deduce its own motion within its environment. We describe a wide-angle motion-stereo method to aid such navigation. Our approach uses vertices of objects observed in a scene as the features to be matched between the two images. The matching is performed in 3-D space, guided by a base line visible in both images. Candidate vertices for matching are obtained by analyzing (1) the edge maps of the input images using a junction type table and (2) the range information of the segments that form the vertices. Range information is obtained by inverse perspective transformation. Some experimental results are shown.
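The inverse perspective transformation mentioned above can be illustrated with a minimal sketch. Assuming a flat ground plane and a pinhole camera at a known height, tilted down by a known pitch angle, each image pixel below the horizon maps to a unique ground-plane point. The function and parameter names below are illustrative assumptions, not taken from the paper.

```python
import math

def inverse_perspective(u, v, f, cu, cv, h, theta):
    """Map image pixel (u, v) to a ground-plane point (X, Y).

    Illustrative model (not the paper's exact formulation):
      f        -- focal length in pixels
      (cu, cv) -- principal point
      h        -- camera height above the ground plane (meters)
      theta    -- downward tilt (pitch) of the optical axis (radians)
    World frame: X forward, Y to the left, Z up; camera at (0, 0, h).
    Returns None for pixels at or above the horizon.
    """
    # Denominator is the (negated) vertical component of the viewing ray;
    # it must be positive for the ray to intersect the ground ahead.
    denom = (v - cv) * math.cos(theta) + f * math.sin(theta)
    if denom <= 0:
        return None  # pixel is at or above the horizon
    t = h / denom  # ray parameter at the ground-plane intersection
    X = t * (f * math.cos(theta) - (v - cv) * math.sin(theta))
    Y = t * -(u - cu)
    return X, Y
```

As a sanity check, the pixel at the principal point looks along the optical axis, so it should strike the ground at a forward distance of h / tan(theta), and pixels lower in the image should map to nearer ground points.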
