Environment-aware sensor fusion for obstacle detection

Reliably detecting obstacles and identifying traversable areas is a key challenge in mobile robotics. For redundancy, information from multiple sensors is often fused. In this work, we discuss how prior knowledge of the environment can improve the quality of sensor fusion, thereby increasing the performance of an obstacle detection module. We define a methodology to quantify the performance of obstacle detection sensors and algorithms. This information is used for environment-aware sensor fusion, where the fusion parameters depend on the past performance of each sensor in different parts of an operation site. The method is suitable for vehicles that operate in a known area, as is the case in many practical scenarios (warehouses, factories, mines, etc.). The system is "trained" by manually driving the robot along a suitable trajectory through the operational areas of a site. The performance of a sensor configuration is then measured by the similarity between the manually driven trajectory and the trajectory that the path planner generates after detecting obstacles. Experiments are performed on an autonomous ground robot equipped with 2D laser sensors and a monocular camera with road detection capabilities. The results show an improvement in obstacle detection performance over a "naive" sensor fusion, illustrating the applicability of the method.
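To make the idea of location-dependent fusion weights concrete, the following is a minimal sketch, not the paper's implementation. It assumes the operation site is discretized into grid cells and that each sensor already has a per-cell performance score in [0, 1] obtained from the training drive; the class and function names (EnvironmentAwareFusion, weights_at, fuse) are hypothetical.

```python
# Hypothetical sketch of environment-aware sensor fusion (not the paper's code).
# Assumes: the site is divided into grid cells, and each sensor's past
# performance score per cell (from the manual "training" drive) is available.

import numpy as np

class EnvironmentAwareFusion:
    def __init__(self, performance_maps):
        # performance_maps: dict sensor_name -> 2D array of scores in [0, 1],
        # one score per map cell, estimated from the training trajectory.
        self.performance_maps = performance_maps

    def weights_at(self, cell):
        # Convert per-cell performance scores into normalized fusion weights.
        scores = np.array([m[cell] for m in self.performance_maps.values()])
        total = scores.sum()
        if total == 0.0:
            # No prior information here: fall back to equal ("naive") weighting.
            return np.full(len(scores), 1.0 / len(scores))
        return scores / total

    def fuse(self, cell, obstacle_probabilities):
        # obstacle_probabilities: per-sensor obstacle estimates for this cell,
        # in the same order as performance_maps.
        w = self.weights_at(cell)
        return float(np.dot(w, obstacle_probabilities))


# Example: the laser has performed better than the camera in this cell,
# so its reading dominates the fused obstacle probability.
maps = {
    "laser": np.full((10, 10), 0.9),
    "camera": np.full((10, 10), 0.4),
}
fusion = EnvironmentAwareFusion(maps)
print(fusion.fuse((3, 7), np.array([0.2, 0.8])))
```

In this sketch, cells where no training data exists revert to uniform weighting, i.e. the "naive" fusion baseline mentioned in the abstract; the actual parameterization and performance metric used in the paper (trajectory similarity against the manual drive) are not reproduced here.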
