Multi-modal local terrain maps from vision and LiDAR

In this paper, we present a method for building precise local terrain maps for an autonomous vehicle from vision and LiDAR that surpasses most existing maps used in the field in both the detail they represent and the efficiency of their construction. The high level of detail is obtained by spatio-temporal fusion of data from multiple, complementary sensors in a grid map. The map consists not only of obstacle probabilities but also of several features of the environment: elevation, color, infrared reflectivity, terrain slope, and surface roughness. An efficient memory-management scheme nevertheless allows the maps to be built online, on board our autonomous vehicle. As we demonstrate by describing several of its applications, the map can serve as a unified representation for solving further perception and navigation problems without resorting to the individual sensor data again. The proposed terrain maps have proven their robustness and precision in many real-world scenarios, leading to award-winning performances at international robotics competitions.
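The multi-layer grid map described above can be pictured as one 2D array per feature sharing a common cell index. The following is a minimal illustrative sketch, not the paper's implementation: the class name, the inverse-sensor-model constants, and the choice of per-cell features are assumptions for the example, with occupancy maintained as log-odds in the standard way.

```python
import numpy as np

class TerrainGridMap:
    """Hypothetical multi-layer local grid map: each cell stores an
    occupancy log-odds value plus several terrain features fused from
    LiDAR and vision. NaN marks cells without a measurement yet."""

    FEATURES = ("log_odds", "elevation", "color", "ir_reflectivity",
                "slope", "roughness")

    def __init__(self, size=100, resolution=0.5):
        self.resolution = resolution  # meters per cell
        self.layers = {f: np.full((size, size), np.nan) for f in self.FEATURES}
        self.layers["log_odds"].fill(0.0)  # prior: p(occupied) = 0.5

    def _index(self, x, y):
        # Map metric coordinates to cell indices.
        return int(x / self.resolution), int(y / self.resolution)

    def integrate_lidar(self, x, y, elevation, reflectivity, hit=True):
        """Fuse one LiDAR return into the cell at metric position (x, y)."""
        i, j = self._index(x, y)
        # Log-odds occupancy update; the increments are illustrative values,
        # not taken from the paper.
        self.layers["log_odds"][i, j] += 0.85 if hit else -0.4
        self.layers["elevation"][i, j] = elevation
        self.layers["ir_reflectivity"][i, j] = reflectivity

    def integrate_vision(self, x, y, color):
        """Fuse a camera measurement (e.g. a gray value) into a cell."""
        i, j = self._index(x, y)
        self.layers["color"][i, j] = color

    def occupancy(self, x, y):
        """Recover the occupancy probability from the stored log-odds."""
        i, j = self._index(x, y)
        return 1.0 / (1.0 + np.exp(-self.layers["log_odds"][i, j]))
```

Derived layers such as slope and roughness would then be computed from neighborhoods of the elevation layer; keeping all layers aligned on one index is what lets later perception modules query the map instead of the raw sensor data.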
