A Visual Navigation Strategy Based on Inverse Perspective Transformation

Vision-based navigation techniques can be roughly divided into map-based and mapless systems. Map-based systems plan routes in advance and are labeled deliberative, while mapless systems analyze the environment on-line to determine the route to follow (Bonin et al., 2008). Some reactive vision-based systems implement local occupancy maps that register the presence of obstacles in the vicinity of the robot, building a symbolic view of the surrounding world. The construction of such maps entails computing the range and angle of obstacles with particular accuracy. These maps are updated on-line and used to navigate safely (Badal et al., 1994) (Goldberg et al., 2002). Many of the local map-based and visual sonar reactive navigation solutions are vulnerable to shadows, inter-reflections and textured floors, since they are mostly based on edge computation or on texture segmentation. Solutions based on homography computation fail in scenarios that generate scenes with multiple planes. Some road line trackers based on Inverse Perspective Transformation (IPT) first need to find lines in the image that converge to the vanishing point. Other IPT-based solutions project the whole image onto the ground, increasing the computational cost. This chapter presents a new navigation strategy comprising obstacle detection and avoidance. Unlike previous approaches, the one presented here avoids back-projecting the whole image, shows a certain robustness to textured floors and inter-reflections, handles scenes with multiple planes, and combines a quantitative process with a set of qualitative rules, converging in a robust technique to safely explore unknown environments.
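The core geometric operation behind IPT is the back-projection of an image pixel onto the ground plane. The following is a minimal illustrative sketch under a pinhole-camera assumption, not the chapter's implementation; all parameter names (focal length, principal point, camera height, tilt) are assumptions introduced here:

```python
import math

def backproject_to_ground(u, v, f, cx, cy, cam_height, tilt):
    """Map an image pixel (u, v) to ground-plane coordinates
    (x forward, y lateral) in metres, for a camera mounted at
    cam_height above the floor and pitched down by `tilt` radians.
    Pinhole model: focal length f (pixels), principal point (cx, cy)."""
    # Ray direction in the camera frame (z forward, y down)
    xc = (u - cx) / f
    yc = (v - cy) / f
    # Pitch the ray down by the camera tilt (rotation about the x-axis)
    zw = math.cos(tilt) - yc * math.sin(tilt)   # forward component
    yw = math.sin(tilt) + yc * math.cos(tilt)   # downward component
    if yw <= 0:
        return None  # ray points at or above the horizon: no ground hit
    # Intersect the ray with the ground plane, cam_height below the camera
    scale = cam_height / yw
    return scale * zw, scale * xc               # (forward, lateral)
```

A feature tracked across consecutive frames can then be tested for consistency: back-projected ground points must move according to the robot's motion, whereas points belonging to obstacles above the floor violate this constraint.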
The method is inspired by visual sonar-based reactive navigation algorithms and implements a new version of the Vector Field Histogram method (Borenstein & Koren, 1991), here adapted for vision-based systems. The complete algorithm runs in five steps: 1) the main image features are detected, tracked across consecutive frames, and classified as obstacle or ground points using a new algorithm based on IPT; 2) the edge map of the processed frames is computed, and edges comprising obstacle points are discriminated from the rest, emphasizing the obstacle boundaries; 3) the range and angle of obstacles located inside a Region of Interest (ROI), centered on the robot and with a fixed radius, are estimated by computing the orientation and distance of those obstacle points that are in contact with the floor; 4) a qualitative occupancy map is built from the data computed in the previous step; and 5) finally, the algorithm computes a vector that steers the robot towards areas free of obstacles.
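Steps 3 to 5 follow the spirit of the Vector Field Histogram: obstacle contact points inside the ROI populate a polar occupancy histogram, and the steering vector points to the free sector closest to the goal heading. A minimal sketch in Python, assuming robot-centred (range, angle) obstacle points; the sector count, ROI radius and occupancy threshold are illustrative values, not the chapter's parameters:

```python
import math

def steering_direction(obstacle_points, roi_radius=2.0, n_sectors=36,
                       goal_angle=0.0, threshold=1):
    """VFH-style steering sketch: build a polar occupancy histogram from
    (range, angle) obstacle contact points and return the direction (rad)
    of the free sector closest to the goal heading, or None if blocked."""
    width = 2 * math.pi / n_sectors
    # 1) Polar histogram: count obstacle points per angular sector,
    #    ignoring points that fall outside the region of interest.
    hist = [0] * n_sectors
    for rng, ang in obstacle_points:
        if rng <= roi_radius:
            sector = int((ang % (2 * math.pi)) / width) % n_sectors
            hist[sector] += 1
    # 2) Candidate directions: sectors whose count is below the
    #    occupancy threshold are considered free of obstacles.
    free = [i for i, c in enumerate(hist) if c < threshold]
    if not free:
        return None  # fully blocked: caller should stop or turn in place
    # 3) Steer towards the free sector centre closest to the goal heading.
    def angdiff(a, b):
        return abs((a - b + math.pi) % (2 * math.pi) - math.pi)
    best = min(free, key=lambda i: angdiff((i + 0.5) * width, goal_angle))
    return (best + 0.5) * width
```

With an obstacle straight ahead the returned direction skips the occupied sector and deflects towards the adjacent free one, which is the qualitative behaviour the steering vector of step 5 encodes.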

[1] Bruce A. Draper, et al. A practical obstacle detection and avoidance system, 1994, Proceedings of 1994 IEEE Workshop on Applications of Computer Vision.

[2] Yoram Koren, et al. The vector field histogram - fast obstacle avoidance for mobile robots, 1991, IEEE Trans. Robotics Autom.

[3] J. Little, et al. Inverse perspective mapping simplifies optical flow computation and obstacle detection, 2004, Biological Cybernetics.

[4] Larry Matthies, et al. Stereo vision and rover navigation software for planetary exploration, 2002, Proceedings, IEEE Aerospace Conference.

[5] Dean A. Pomerleau, et al. Overtaking vehicle detection using implicit optical flow, 1997, Proceedings of Conference on Intelligent Transportation Systems.

[6] Sean Dougherty, et al. Edge Detector Evaluation Using Empirical ROC Curves, 2001, Comput. Vis. Image Underst.

[7] Manuela Veloso, et al. Fast goal navigation with obstacle avoidance using a dynamic local visual model, 2005.

[8] T. Rabie, et al. Active-Vision-based Traffic Surveillance and Control, 2001.

[9] Richard O. Duda, et al. Pattern Classification and Scene Analysis, 1974, A Wiley-Interscience publication.

[10] Jagath Samarabandu, et al. Robust and Efficient Feature Tracking for Indoor Navigation, 2009, IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics).

[11] Nicolas Simond, et al. Obstacle detection from IPM and super-homography, 2007, 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems.

[12] Bernhard P. Wrobel, et al. Multiple View Geometry in Computer Vision, 2001.

[13] Massimo Bertozzi, et al. GOLD: a parallel real-time stereo vision system for generic obstacle and lane detection, 1998, IEEE Trans. Image Process.

[14] Cordelia Schmid, et al. A Performance Evaluation of Local Descriptors, 2005, IEEE Trans. Pattern Anal. Mach. Intell.

[15] Christopher G. Harris, et al. A Combined Corner and Edge Detector, 1988, Alvey Vision Conference.

[16] James J. Little, et al. Vision-based global localization and mapping for mobile robots, 2005, IEEE Transactions on Robotics.

[17] Carlo Tomasi, et al. Good features to track, 1994, Proceedings of IEEE Conference on Computer Vision and Pattern Recognition.

[18] Manuela M. Veloso, et al. Visual sonar: fast obstacle avoidance using monocular vision, 2003, Proceedings 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003).

[19] John F. Canny. A Computational Approach to Edge Detection, 1986, IEEE Transactions on Pattern Analysis and Machine Intelligence.

[20] J. Hanley, et al. The meaning and use of the area under a receiver operating characteristic (ROC) curve, 1982, Radiology.

[21] Sumetee Kesorn. Visual Navigation for Mobile Robots: a Survey, 2012.

[22] Yuan Shu, et al. Vision based lane detection in autonomous vehicle, 2004, Fifth World Congress on Intelligent Control and Automation.

[23] David G. Lowe, et al. Distinctive Image Features from Scale-Invariant Keypoints, 2004.

[24] Martin C. Martin. Evolving visual sonar: Depth from monocular images, 2006, Pattern Recognit. Lett.

[25] Parvaneh Saeedi, et al. Vision-based 3-D trajectory tracking for unknown environments, 2006, IEEE Transactions on Robotics.

[26] Ian Horswill. Collision Avoidance by Segmentation, 1995.

[27] Baoxin Li, et al. Homography-based ground detection for a mobile robot platform using a single camera, 2006, Proceedings 2006 IEEE International Conference on Robotics and Automation (ICRA 2006).

[28] Anton Kummert, et al. Vision-based pedestrian detection - reliable pedestrian candidate detection by combining IPM and a 1D profile, 2007, 2007 IEEE Intelligent Transportation Systems Conference.

[29] Se-Young Oh, et al. Visual sonar based localization using particle attraction and scattering, 2005, IEEE International Conference on Mechatronics and Automation.