Low-Level Active Visual Navigation: Increasing Robustness of Vision-Based Localization Using Potential Fields

This letter proposes a low-level visual navigation algorithm to improve the visual localization of a mobile robot. The algorithm, based on artificial potential fields, associates each feature in the current image frame with an attractive or neutral potential energy, with the objective of generating a control action that drives the vehicle towards the goal while still favoring feature-rich areas within a local scope, thereby improving localization performance. A key property of the proposed method is that it does not rely on mapping; it is therefore a lightweight solution that can be deployed on miniaturized aerial robots, in which memory and computational power are major constraints. Simulation and real experimental results using a mini quadrotor equipped with a downward-looking camera demonstrate that the proposed method can effectively drive the vehicle to a designated goal through a path that prevents localization failure.
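To make the idea concrete, the sketch below shows one plausible way to blend a goal-attraction term with per-feature attraction terms into a saturated velocity command, so that dense feature clusters bias the motion toward well-textured regions. This is a minimal illustration under assumed conventions, not the authors' implementation; the function name, the gains `k_goal` and `k_feat`, and the planar-position representation are all assumptions introduced here.

```python
import numpy as np


def potential_field_command(robot_xy, goal_xy, feature_xy,
                            k_goal=1.0, k_feat=0.05, v_max=0.5):
    """Hypothetical sketch of a feature-based potential field controller.

    robot_xy   : (2,) current planar position estimate
    goal_xy    : (2,) goal position
    feature_xy : (N, 2) tracked feature locations projected onto the plane
    Returns a planar velocity command (2,).
    """
    # Attractive term pulling the vehicle toward the goal.
    f_goal = k_goal * (goal_xy - robot_xy)

    # Each feature contributes a weaker attractive term; feature-rich
    # areas therefore pull the command toward well-textured regions,
    # which helps keep vision-based localization well conditioned.
    f_feat = np.zeros(2)
    if len(feature_xy) > 0:
        f_feat = k_feat * np.sum(feature_xy - robot_xy, axis=0)

    # Sum the fields and saturate to a maximum speed.
    v = f_goal + f_feat
    speed = np.linalg.norm(v)
    if speed > v_max:
        v *= v_max / speed
    return v
```

In this sketch the feature term acts only within the current field of view (a local scope), so the controller never needs a map: the trade-off between progress to the goal and staying over textured ground is set entirely by the relative magnitudes of the assumed gains.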
