Relative continuous-time SLAM

Appearance-based techniques for simultaneous localization and mapping (SLAM) have been highly successful in assisting robot motion estimation; however, these vision-based methods have long assumed imaging sensors with a global shutter, which suit the traditional, discrete-time formulation of visual estimation problems. To adapt these techniques to scanning sensors, we propose novel methods for both outlier rejection and batch nonlinear estimation. Traditionally, the SLAM problem has been formulated in a single, privileged coordinate frame, which can become computationally expensive over long distances, particularly when a loop closure requires the adjustment of many pose variables. Recent discrete-time estimators have shown that a completely relative coordinate framework can incrementally find a close approximation of the full maximum-likelihood solution in constant time. To accommodate scanning sensors, we move the relative coordinate formulation of SLAM into continuous time by estimating the velocity profile of the robot. We derive the relative formulation of the continuous-time robot trajectory and formulate an estimator using temporal basis functions. We also propose a motion-compensated outlier-rejection scheme that uses a constant-velocity model within the random sample consensus (RANSAC) algorithm. Our experimental results use intensity imagery from a two-axis scanning lidar; owing to the sensor's scanning nature, it behaves much like a slow rolling-shutter camera. Both algorithms are validated on a sequence of 6880 lidar frames acquired over a 1.1 km traversal.
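The core idea of estimation with temporal basis functions can be illustrated with a minimal, hypothetical sketch: a 1-D velocity profile is written as a weighted sum of basis functions of time, the weights are recovered by batch linear least squares from asynchronous measurements, and the resulting continuous-time estimate can then be queried at the firing time of any individual sensor measurement. (Gaussian bumps stand in here for the paper's actual basis choice; all names and parameters below are illustrative, not the authors' implementation.)

```python
import numpy as np

def basis_matrix(times, centers, width):
    """Evaluate each Gaussian temporal basis function at each time stamp."""
    return np.exp(-0.5 * ((times[:, None] - centers[None, :]) / width) ** 2)

rng = np.random.default_rng(0)

# Ground-truth velocity profile sampled at asynchronous measurement times,
# mimicking a scanning sensor whose measurements arrive one by one.
t_meas = np.sort(rng.uniform(0.0, 10.0, 200))
v_true = np.sin(t_meas) + 0.5
v_meas = v_true + 0.05 * rng.standard_normal(t_meas.size)

# Basis functions spaced evenly over the time window.
centers = np.linspace(0.0, 10.0, 15)
Phi = basis_matrix(t_meas, centers, width=0.7)

# Batch linear least-squares estimate of the basis weights.
w, *_ = np.linalg.lstsq(Phi, v_meas, rcond=None)

# The continuous-time estimate can now be evaluated at ANY query time,
# e.g. the exact time stamp of each lidar return.
t_query = np.linspace(0.5, 9.5, 50)
v_est = basis_matrix(t_query, centers, width=0.7) @ w
print(np.max(np.abs(v_est - (np.sin(t_query) + 0.5))))
```

In the paper's full problem the quantity being parameterized is the 6-DOF body velocity and the least-squares problem is nonlinear, but the same structure applies: a finite weight vector yields a trajectory defined for all times, which is what allows each motion-distorted measurement to be handled at its own time stamp.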
