Monocular Direct Sparse Localization in a Prior 3D Surfel Map

In this paper, we introduce an approach to tracking the pose of a monocular camera in a prior surfel map. By rendering vertex and normal maps from the prior surfel map, we obtain global planar information for the sparse points tracked in the image frame. Tracked points with this global planar information contribute global constraints to the system, while points without it contribute local frame-to-frame constraints. Our approach formulates all constraints as direct photometric errors within a local window of frames. The final optimization uses these constraints to estimate accurate global 6-DoF camera poses with absolute scale. Extensive simulation and real-world experiments demonstrate that our monocular method provides accurate camera localization under a variety of conditions.
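To illustrate the kind of residual the optimization minimizes, the following is a minimal sketch (not the authors' implementation) of a direct photometric error for a single tracked point, assuming a pinhole camera model, a known inverse depth in the reference frame, and a nearest-neighbour intensity lookup in place of the bilinear interpolation a real system would use:

```python
import numpy as np

def project(K, p_cam):
    """Pinhole projection of a 3D point in camera coordinates to pixel coordinates."""
    u = K @ (p_cam / p_cam[2])
    return u[:2]

def photometric_residual(I_ref, I_tgt, K, T_tgt_ref, u_ref, inv_depth):
    """Direct photometric error for one tracked point.

    I_ref, I_tgt : 2D intensity images, indexed [row, col]
    K            : 3x3 camera intrinsic matrix
    T_tgt_ref    : 4x4 rigid transform from reference frame to target frame
    u_ref        : (x, y) pixel location of the point in the reference frame
    inv_depth    : inverse depth of the point in the reference frame
    """
    x, y = u_ref
    # Back-project the reference pixel to a 3D point in the reference frame.
    p_ref = np.linalg.inv(K) @ np.array([x, y, 1.0]) / inv_depth
    # Transform into the target frame and project to pixel coordinates.
    p_tgt = (T_tgt_ref @ np.append(p_ref, 1.0))[:3]
    u, v = project(K, p_tgt)
    # Intensity difference between the reprojected and reference pixels.
    return float(I_tgt[int(round(v)), int(round(u))] - I_ref[int(y), int(x)])
```

A window of frames contributes one such residual per tracked point per frame pair; points associated with surfels rendered from the prior map constrain the poses globally, while the remaining points constrain consecutive frames only.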
