Image-based path planning for outdoor mobile robots

Mobile robots operating in natural terrain need long-range perception to navigate efficiently. Although LADAR is commonly used on such systems, providing range data out to 25 m and beyond, we focus instead on the information that can be extracted from vision. Our robot senses terrain with only two stereo camera pairs, which provide reliable range data up to 5 m away; this is not enough to prevent myopic behavior. To overcome this limitation, we have developed a novel approach to navigation that plans a path directly in the image space of a monocular camera. We apply a learned color-to-cost mapping to transform a raw monocular image into a cost image; then, after a pseudo-configuration-space transform, we search for a pixel-to-pixel path in the cost image from a point in front of the robot to the projected goal point. Our implementation has reacted to obstacles at a range of 93 m, far beyond the reach of our stereo perception. We describe our method in detail, present results from testing under the DARPA Learning Applied to Ground Robots (LAGR) program, and discuss the characteristics and trade-offs of our approach. © 2009 Wiley Periodicals, Inc.
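A minimal sketch of this pipeline, in Python, might look like the following. Everything here is an illustrative assumption rather than the authors' implementation: the function names (color_to_cost, pseudo_cspace, plan_in_image), the linear color-to-cost model, the fixed-radius grey-scale dilation standing in for the pseudo-configuration-space transform (in image space the robot's footprint actually shrinks with range, which this sketch ignores), and the choice of Dijkstra for the pixel-to-pixel search.

```python
# Hypothetical sketch of image-space path planning: learned color-to-cost
# mapping, a pseudo-configuration-space inflation, and a pixel-level search.
import heapq
import numpy as np

def color_to_cost(image, weights=np.array([0.1, -0.5, 0.9]), bias=0.5):
    """Map an HxWx3 RGB image (floats in [0,1]) to a traversal-cost image
    with a linear color model; the weights stand in for a learned mapping."""
    cost = image @ weights + bias
    return np.clip(cost, 0.01, None)  # keep costs positive for the search

def pseudo_cspace(cost, robot_radius_px=3):
    """Inflate cost by a disk-shaped robot footprint (grey-scale dilation),
    so a one-pixel path implies clearance for the whole robot.  Edge
    wrap-around from np.roll is ignored for brevity."""
    out = cost.copy()
    r = robot_radius_px
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dx * dx + dy * dy <= r * r:
                shifted = np.roll(np.roll(cost, dy, axis=0), dx, axis=1)
                out = np.maximum(out, shifted)
    return out

def plan_in_image(cost, start, goal):
    """Dijkstra search for a minimum-cost pixel-to-pixel path from the point
    in front of the robot (start) to the projected goal pixel (goal)."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    parent = {}
    dist[start] = 0.0
    pq = [(0.0, start)]
    while pq:
        d, (y, x) = heapq.heappop(pq)
        if (y, x) == goal:
            break
        if d > dist[y, x]:
            continue  # stale queue entry
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1),
                       (-1, -1), (-1, 1), (1, -1), (1, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + cost[ny, nx] * (1.414 if dy and dx else 1.0)
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    parent[(ny, nx)] = (y, x)
                    heapq.heappush(pq, (nd, (ny, nx)))
    # Walk back from the goal to recover the pixel path
    path, node = [], goal
    while node != start:
        path.append(node)
        node = parent[node]
    path.append(start)
    return path[::-1]

if __name__ == "__main__":
    img = np.random.rand(64, 64, 3)          # stand-in for a camera frame
    cost = pseudo_cspace(color_to_cost(img))
    path = plan_in_image(cost, start=(63, 32), goal=(5, 32))
    print(f"planned {len(path)} pixels in image space")
```

In a real system the search would presumably run at frame rate over each new cost image, and the resulting pixel path would be projected back onto the ground plane for the vehicle to follow; both of those details are assumptions here, not claims about the paper's implementation.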
