Bearings-only path following with a vision-based potential field

In this paper, we present a vision-based path-following algorithm for a non-holonomic wheeled platform. The algorithm chooses control actions that minimise a potential-field cost function computed directly on the image plane. It is suitable for teach-and-replay or leader-follower implementations in which the desired path is represented as a collection of images. The cost function is computed from the relative bearings of features matched between the current image and previously observed images; a forward prediction step then selects the control action that yields the greatest reduction in the cost. The algorithm is demonstrated on a 400 m path in an outdoor environment, where its accuracy is shown to be comparable to that of differential GPS.
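The control loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the cost is taken as the sum of squared bearing errors between matched features, and the forward prediction uses a pure-rotation motion model (translation-induced bearing changes, which depend on feature depth, are ignored). The function names, candidate set, and time step `dt` are all assumptions made for the sketch.

```python
import math

def ang_diff(a, b):
    """Smallest signed difference a - b, wrapped to [-pi, pi]."""
    return (a - b + math.pi) % (2 * math.pi) - math.pi

def bearing_cost(current, reference):
    """Potential-field-style cost: sum of squared bearing errors
    (radians) over features matched between the two images."""
    return sum(ang_diff(c, r) ** 2 for c, r in zip(current, reference))

def choose_turn_rate(current, reference, candidates, dt=0.1):
    """Forward prediction step: for each candidate turn rate, predict
    the bearings after one step (rotation-only approximation) and pick
    the rate giving the greatest reduction in the cost function."""
    best_rate, best_cost = None, float("inf")
    for w in candidates:
        # Under pure rotation at rate w, every bearing shifts by -w * dt.
        predicted = [b - w * dt for b in current]
        c = bearing_cost(predicted, reference)
        if c < best_cost:
            best_rate, best_cost = w, c
    return best_rate
```

In the real system the reference bearings come from the stored path images (teach-and-replay) or from the leader, and the prediction would use the platform's non-holonomic kinematics rather than the rotation-only shortcut used here.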
