Lighting-Invariant Visual Odometry using Lidar Intensity Imagery and Pose Interpolation

Recent studies have demonstrated that images constructed from lidar reflectance information are more robust to lighting changes in outdoor environments than traditional passive stereo camera imagery. Moreover, for visual navigation methods originally developed using stereo vision, such as visual odometry (VO) and visual teach and repeat (VT&R), scanning lidar can serve as a direct replacement for the passive sensor. The resulting systems retain the efficiency of sparse, appearance-based techniques while overcoming the dependence on adequate and consistent lighting that traditional cameras require. However, due to the scanning nature of the lidar and assumptions made in previous implementations, data acquired during continuous vehicle motion suffer from geometric motion distortion, which can lead to poor metric VO estimates even over short distances (e.g., 5–10 m). This paper revisits the measurement-timing assumption made in previous systems and proposes a frame-to-frame VO estimation framework based on a novel pose interpolation scheme that explicitly accounts for the exact acquisition time of each feature measurement. We present promising preliminary results of the new method using data generated by a lidar simulator and experimental data collected in a planetary analogue environment with a real scanning laser rangefinder.
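
To make the timing idea concrete, the sketch below interpolates the sensor pose at the exact timestamp of each feature measurement, rather than assigning every measurement in a scan a single acquisition time. This is a minimal illustration under stated assumptions, not the paper's actual estimator: poses are assumed to be stored as a translation plus a unit quaternion, translation is interpolated linearly, and rotation is interpolated with spherical linear interpolation (SLERP, after Shoemake); all function and variable names here are hypothetical.

```python
import numpy as np

def slerp(q0, q1, alpha):
    """Spherical linear interpolation between unit quaternions (x, y, z, w)."""
    dot = float(np.dot(q0, q1))
    if dot < 0.0:            # flip one quaternion to take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:         # nearly parallel: plain lerp is numerically stable
        q = q0 + alpha * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1.0 - alpha) * theta) * q0
            + np.sin(alpha * theta) * q1) / np.sin(theta)

def interpolate_pose(t, t0, r0, q0, t1, r1, q1):
    """Pose (translation r, quaternion q) at time t, with t0 <= t <= t1."""
    alpha = (t - t0) / (t1 - t0)
    r = (1.0 - alpha) * r0 + alpha * r1   # linear in translation
    q = slerp(q0, q1, alpha)              # spherical in rotation
    return r, q

# Each feature keeps its own acquisition time, so every measurement is
# evaluated against the pose the sensor actually had when it was taken.
t0, t1 = 0.0, 1.0
r0, q0 = np.zeros(3), np.array([0.0, 0.0, 0.0, 1.0])
r1, q1 = np.array([0.5, 0.0, 0.0]), np.array([0.0, 0.0, np.sin(0.05), np.cos(0.05)])
for t_feature in (0.1, 0.4, 0.9):
    r, q = interpolate_pose(t_feature, t0, r0, q0, t1, r1, q1)
```

A scheme of this kind removes the assumption that all measurements in a lidar "image" were acquired simultaneously; the exact pose parameterization and interpolation used in the paper may differ from this sketch.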
