Multimodal Sensor Fusion for Robust Obstacle Detection and Classification in the Maritime RobotX Challenge

This paper describes a novel probabilistic sensor fusion framework aimed at improving the detection accuracy and classification of the obstacles encountered in the Maritime RobotX Challenge. Experience from both the 2014 and 2016 Maritime RobotX Challenges showed that detecting obstacles with LIDAR alone and classifying them with vision alone can be defeated by environmental conditions, such as glare from the sun, or by objects such as the spherical black buoys in the obstacle field, which disperse LIDAR rays. This paper proposes a new multimodal sensor fusion approach that combines the data streams of perception sensors, such as LIDAR, RADAR, and cameras, to make detection and classification more robust than single-sensor methods. The perception framework is evaluated on data collected at both the 2014 and 2016 Maritime RobotX Challenges. The proposed detection and classification framework is now being transferred to the Queensland University of Technology (QUT) autonomous surface vehicle to improve overall mapping accuracy and task execution.
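The abstract does not spell out the fusion mechanics, but a minimal sketch of one common probabilistic fusion scheme, per-cell log-odds updates over an occupancy grid with an independent inverse sensor model per modality, illustrates the general idea behind combining LIDAR, RADAR, and camera evidence. The sensor names, probabilities, and class below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Inverse sensor models: assumed probabilities that a grid cell is occupied
# given a detection ("hit") or a clear reading ("miss") from each sensor.
# These values are illustrative placeholders, not tuned parameters.
SENSOR_MODELS = {
    "lidar":  {"p_hit": 0.9, "p_miss": 0.2},
    "radar":  {"p_hit": 0.8, "p_miss": 0.3},
    "camera": {"p_hit": 0.7, "p_miss": 0.4},
}

def log_odds(p):
    """Convert a probability to log-odds."""
    return np.log(p / (1.0 - p))

class FusionGrid:
    """2D occupancy grid fused from multiple sensors via log-odds updates."""

    def __init__(self, shape):
        # Zero log-odds corresponds to p(occupied) = 0.5 (unknown).
        self.grid = np.zeros(shape)

    def update(self, sensor, cells, hit):
        """Fold one observation from `sensor` into a list of (row, col) cells."""
        model = SENSOR_MODELS[sensor]
        p = model["p_hit"] if hit else model["p_miss"]
        for r, c in cells:
            self.grid[r, c] += log_odds(p)

    def occupancy(self):
        """Recover per-cell occupancy probabilities from accumulated log-odds."""
        return 1.0 - 1.0 / (1.0 + np.exp(self.grid))

# Example: a LIDAR return and a camera detection agree on cell (5, 5),
# while RADAR reports a neighbouring cell along the beam as free space.
grid = FusionGrid((10, 10))
grid.update("lidar", [(5, 5)], hit=True)
grid.update("camera", [(5, 5)], hit=True)
grid.update("radar", [(5, 4)], hit=False)
print(grid.occupancy()[5, 5])  # high: two modalities agree the cell is occupied
print(grid.occupancy()[5, 4])  # below 0.5: radar observed free space
```

The additive log-odds form is what makes multimodal fusion robust to a single degraded sensor: a missed camera detection under sun glare merely withholds evidence for a cell, while agreeing LIDAR or RADAR returns can still push that cell's occupancy well above the detection threshold.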
