Autonomous Robot Navigation: A Study Using Optical Flow and Log-Polar Image Representation

This paper describes a methodology for autonomous robot navigation based on the log-polar transform of images and optical flow. The navigation task involves detecting obstacles in the traversable path, a basic capability for mobility that includes measuring the height of objects to classify them as obstacles to be avoided or features to be ignored. The vanishing point in the image corresponds to the Focus of Expansion (FOE), since we assume the mobile robot moves with a translational velocity parallel to the ground plane. The FOE is determined from the optical flow field using a phase-based approach. From the FOE in the images, and assuming the robot moves on level ground, the planar homography H is recovered, and any object on the floor can be detected. In this paper we show that it is not necessary to recover the homography H explicitly; it is sufficient to evaluate the displacement of tracked points along epipolar lines in the image. This article describes how these epipolar lines are computed and their relation to the FOE when the robot moves with translational velocity.
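Under pure translation, every optical-flow vector lies on an epipolar line passing through the FOE, so the FOE can be estimated as the least-squares intersection of those lines. The following sketch illustrates this geometric idea with a hypothetical synthetic flow field; it is not the paper's phase-based estimator, only a minimal linear-algebra illustration of how radial flow constrains the FOE.

```python
import math

def estimate_foe(points, flows):
    """Least-squares intersection of flow lines.

    Under pure translation each flow vector at point p with direction d
    defines a line through the FOE; the constraint n . (f - p) = 0, with
    n perpendicular to d, gives 2x2 normal equations for the FOE f.
    """
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (x, y), (u, v) in zip(points, flows):
        norm = math.hypot(u, v)
        if norm < 1e-9:          # zero flow carries no direction information
            continue
        nx, ny = -v / norm, u / norm   # unit normal to the flow direction
        a11 += nx * nx
        a12 += nx * ny
        a22 += ny * ny
        rhs = nx * x + ny * y
        b1 += nx * rhs
        b2 += ny * rhs
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det,
            (a11 * b2 - a12 * b1) / det)

# Synthetic radial flow expanding from an assumed FOE at (160, 120)
foe = (160.0, 120.0)
pts = [(float(x), float(y)) for x in range(0, 320, 40)
                            for y in range(0, 240, 40)]
flows = [(0.05 * (x - foe[0]), 0.05 * (y - foe[1])) for x, y in pts]
print(estimate_foe(pts, flows))
```

A useful side effect of this geometry: if the log-polar transform is centered on the FOE, the radial expansion of translational flow becomes a pure shift along the log-radius axis, which is one motivation for combining the log-polar representation with FOE estimation.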
