Image-based path planning for outdoor mobile robots

Traditionally, path planning for field robotic systems is performed in Cartesian space: sensor readings are transformed into terrain costs in a (Cartesian) costmap, and a path to the goal is planned in that map. In this paper, we propose a new approach: planning a path for the robot in the image space of an on-board camera. We apply a learned color-to-cost mapping to transform a raw image into a cost-image, which then undergoes a pseudo-configuration-space transform. We search the resulting cost-image for a path to the goal point projected into the image. One benefit of our approach is the ability to react to obstacles at ranges well beyond our 3D sensor range: independent testing has confirmed that our system effectively reacted to obstacles at a range of 93 m, while our stereo sensor provides reliable data only up to 5 m away. We describe the details of our technique and report results from testing under the DARPA LAGR and UPI programs.
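The pipeline in the abstract can be sketched on a toy cost-image. This is an illustrative reconstruction, not the authors' implementation: the learned color-to-cost mapping is replaced by a hand-made cost grid, the pseudo-configuration-space transform is approximated by a grayscale dilation of the cost-image, and the image-space search is a plain Dijkstra from the robot's pixel to the projected goal pixel (all function names are hypothetical).

```python
import heapq

def inflate(cost, radius):
    """Pseudo-configuration-space transform (illustrative): grayscale
    dilation so each pixel takes the maximum cost in a square window,
    letting us plan for a point robot with a safety margin."""
    h, w = len(cost), len(cost[0])
    return [[max(cost[yy][xx]
                 for yy in range(max(0, y - radius), min(h, y + radius + 1))
                 for xx in range(max(0, x - radius), min(w, x + radius + 1)))
             for x in range(w)] for y in range(h)]

def plan(cost, start, goal):
    """Dijkstra over the cost-image; each 4-connected step pays a unit
    distance plus the cost of the pixel entered. Returns the pixel path
    from start to goal, or None if the goal is unreachable."""
    h, w = len(cost), len(cost[0])
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        y, x = u
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            v = (y + dy, x + dx)
            if 0 <= v[0] < h and 0 <= v[1] < w:
                nd = d + 1.0 + cost[v[0]][v[1]]
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
    if goal != start and goal not in prev:
        return None
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

# Toy 6x8 cost-image: a high-cost "obstacle" column (e.g. terrain the
# learned mapping labels expensive) with a free gap at the bottom row.
C = [[0.0] * 8 for _ in range(6)]
for y in range(4):
    C[y][4] = 10.0
path = plan(inflate(C, 1), (0, 0), (5, 7))
```

After dilation with radius 1, the high costs spread to the neighboring columns and rows, so the planner routes through the bottom-row gap rather than clipping the obstacle. In the paper's setting the path found in image space would then be converted into steering commands for the robot, with planning repeated as new frames arrive.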
