Combining photometric and depth data for lightweight and robust visual odometry

This paper presents a visual odometry system for mobile robots that operates on RGB-D data from a Kinect/Xtion-class sensor, and reports experimental results obtained on publicly available datasets. The aim of the presented research was to build a lightweight RGB-D visual odometry system that can run in real time on board robots with limited computing resources, such as walking machines. The proposed approach tracks FAST keypoints over a sequence of RGB frames to establish correspondences between photometric features in selected keyframes of the RGB-D stream, and then uses the readily available depth data to map these features into 3D coordinates. Evaluation on publicly available benchmark data demonstrates satisfactory accuracy with very low demands on computing resources.
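The step of mapping tracked photometric features into 3D coordinates can be sketched as standard pinhole back-projection of a pixel with its registered depth value. This is a minimal illustration, not the paper's implementation; the intrinsics `fx`, `fy`, `cx`, `cy` below are placeholder values typical of a Kinect-class sensor, not the authors' calibration.

```python
def backproject(u, v, z, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Map a pixel (u, v) with depth z (in metres) to camera-frame 3D coordinates.

    Assumes a pinhole camera model with depth registered to the RGB image,
    which is what Kinect/Xtion-class sensors provide.
    """
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# A keypoint at the principal point lies on the optical axis:
point = backproject(319.5, 239.5, 1.0)
print(point)  # (0.0, 0.0, 1.0)
```

Once keypoints from two keyframes are lifted to 3D in this way, the relative sensor motion can be estimated from the resulting point correspondences with a rigid-body alignment step (typically wrapped in RANSAC to reject outliers).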
