Vision-based reactive autonomous navigation with obstacle avoidance: Towards a non-invasive and cautious exploration of marine habitat

We present a vision-based approach for the reactive autonomous navigation of an underwater vehicle. In particular, we are interested in the exploration and continuous monitoring of coral reefs in order to diagnose disease or physical damage. An autonomous underwater vehicle must decide, in real time, the best route while avoiding collisions with fragile marine life and structures. We have opted to use only visual information as input. We improve the Simple Linear Iterative Clustering (SLIC) algorithm so that, together with a simple nearest-neighbor classifier, it robustly and efficiently segments and classifies objects versus water, even in poor visibility conditions. From the resulting classification and the robot's current direction and orientation, the next possible collision-free route can be estimated. This is achieved by grouping together neighboring water superpixels (considered as "regions of interest"). Finally, we use a model-free robust control scheme that allows the robot to autonomously navigate along the collision-free routes obtained in the first step. The experimental results, both in simulation and in practice, show the effectiveness of the proposed navigation system.
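The core perception step described above, superpixel segmentation followed by nearest-neighbor classification of each superpixel as water or obstacle, can be sketched as follows. This is a minimal illustration using off-the-shelf SLIC (scikit-image) and a 1-NN classifier (scikit-learn) on mean-color features, not the authors' improved SLIC variant; the synthetic image, color features, and parameter choices are assumptions for demonstration.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.neighbors import KNeighborsClassifier

def classify_superpixels(image, train_feats, train_labels, n_segments=100):
    """Segment `image` into SLIC superpixels, then label each superpixel
    (e.g. 0 = water, 1 = obstacle) with a nearest-neighbor classifier
    trained on mean-color features."""
    segments = slic(image, n_segments=n_segments, compactness=10,
                    start_label=0)
    knn = KNeighborsClassifier(n_neighbors=1)
    knn.fit(train_feats, train_labels)
    labels = {}
    for sp in np.unique(segments):
        mean_color = image[segments == sp].mean(axis=0)  # per-superpixel feature
        labels[sp] = knn.predict([mean_color])[0]
    return segments, labels

# Synthetic scene: blue "water" on the left, brown "coral" on the right.
img = np.zeros((60, 60, 3))
img[:, :30] = (0.1, 0.3, 0.8)   # water-like color
img[:, 30:] = (0.6, 0.4, 0.2)   # coral-like color

# Two training exemplars, one per class (toy values).
train_feats = np.array([[0.1, 0.3, 0.8], [0.6, 0.4, 0.2]])
train_labels = np.array([0, 1])

segments, labels = classify_superpixels(img, train_feats, train_labels)
# Neighboring superpixels labeled as water form the candidate
# collision-free "regions of interest" for route estimation.
water_sps = [sp for sp, c in labels.items() if c == 0]
```

In the navigation pipeline, the `water_sps` set would then be grouped by adjacency and intersected with the robot's current heading to select the next collision-free route.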