Towards appearance-based methods for lidar sensors

Cameras have emerged as the dominant sensor modality for localization and mapping in three-dimensional, unstructured terrain, largely due to the success of sparse, appearance-based techniques such as visual odometry. However, the Achilles' heel of all camera-based systems is their dependence on consistent ambient lighting, which poses a serious problem in outdoor environments that lack adequate or consistent light, such as the Moon. Actively illuminated sensors, on the other hand, such as a light detection and ranging (lidar) device, use their own light source to illuminate the scene, making them a favourable alternative in light-denied environments. The purpose of this paper is to demonstrate that the largely successful appearance-based methods traditionally used with cameras can be applied to laser-based sensors such as a lidar. We present two experiments that are vital to understanding and enabling appearance-based methods for lidar sensors. In the first experiment, we explore the stability of a representative keypoint detection and description algorithm on both camera images and lidar intensity images collected over a 24-hour period. In the second experiment, we validate our approach by implementing visual odometry based on sparse bundle adjustment on a sequence of lidar intensity images.
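To make the core operation of the first experiment concrete, the sketch below detects and matches keypoints between two lidar intensity images. This is a hedged illustration, not the authors' implementation: the paper's pipeline is SURF-based (via the OpenSURF library), whereas this sketch substitutes OpenCV's SIFT detector/descriptor, and the file names and ratio-test threshold are illustrative assumptions.

```python
# Minimal sketch: keypoint detection and matching on lidar intensity images.
# Assumes opencv-python with SIFT as a stand-in for SURF, and two
# hypothetical intensity images saved as 8-bit grayscale PNGs.
import cv2

# Load two intensity images taken at different times of day (hypothetical paths).
img_noon = cv2.imread("intensity_1200.png", cv2.IMREAD_GRAYSCALE)
img_midnight = cv2.imread("intensity_0000.png", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute descriptors on each intensity image.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img_noon, None)
kp2, des2 = sift.detectAndCompute(img_midnight, None)

# Match descriptors with a brute-force matcher and Lowe's ratio test
# to keep only distinctive correspondences.
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn_matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in knn_matches if m.distance < 0.8 * n.distance]

# The number of surviving matches is a simple proxy for keypoint stability
# across lighting and time-of-day changes.
print(f"{len(kp1)} / {len(kp2)} keypoints, {len(good)} ratio-test matches")
```

For the second experiment, matched keypoints of this kind would feed outlier rejection (e.g., RANSAC) and a sparse bundle adjustment over sensor poses and landmarks; that machinery is omitted from the sketch.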
