Accurate extrinsic calibration between monocular camera and sparse 3D Lidar points without markers

Automatically calibrating the multiple sensors on an autonomous vehicle is of practical interest. In this paper, we address the challenging case of a low-resolution Lidar and present a practical approach to extrinsic calibration between a monocular camera and a Lidar that provides only sparse 3D measurements. We formulate the problem as directly minimizing a feature error evaluated between frames, in the spirit of image warping. To overcome the difficulties of this optimization problem, we propose to use a distance transform together with a projection error model to obtain the key approximate edge points to which the loss function is most sensitive. Finally, the loss is minimized with an efficient random selection algorithm. Experimental results on the KITTI dataset show that the proposed method achieves competitive accuracy, with a particular improvement in translation estimation.
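To make the kind of objective described above concrete, the following is a minimal Python sketch: sparse Lidar edge points are projected into the image under a candidate extrinsic transform, and the cost is read off a distance transform of the image edge map, so projections that land far from image edges are penalized. This is an illustration rather than the authors' implementation; the names (`edge_mask`, `lidar_edge_pts`, `calibrate_random_search`) and the plain random-perturbation search used here in place of the paper's random selection algorithm are assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from scipy.spatial.transform import Rotation


def project_points(K, R, t, pts_lidar):
    """Project Nx3 Lidar points into the image using extrinsics (R, t) and intrinsics K."""
    pts_cam = pts_lidar @ R.T + t            # Lidar frame -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]   # keep only points in front of the camera
    uv = pts_cam @ K.T                       # pinhole projection
    return uv[:, :2] / uv[:, 2:3]


def dt_alignment_cost(uv, dist_map):
    """Mean distance-transform value at the projected pixels (lower = better edge alignment)."""
    h, w = dist_map.shape
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    if not inside.any():
        return np.inf
    return dist_map[v[inside], u[inside]].mean()


def calibrate_random_search(K, edge_mask, lidar_edge_pts, T_init, n_iter=2000, sigma=0.01, seed=0):
    """Toy random-perturbation search over SE(3); a stand-in for the paper's selection scheme."""
    rng = np.random.default_rng(seed)
    # Distance from every pixel to its nearest image edge (edge_mask: boolean edge map).
    dist_map = distance_transform_edt(~edge_mask)

    best_T = T_init.copy()
    best_cost = dt_alignment_cost(
        project_points(K, best_T[:3, :3], best_T[:3, 3], lidar_edge_pts), dist_map)

    for _ in range(n_iter):
        cand_T = best_T.copy()
        # Small random rotation (axis-angle) and translation perturbation around the best estimate.
        cand_T[:3, :3] = Rotation.from_rotvec(rng.normal(scale=sigma, size=3)).as_matrix() @ best_T[:3, :3]
        cand_T[:3, 3] = best_T[:3, 3] + rng.normal(scale=sigma, size=3)
        cost = dt_alignment_cost(
            project_points(K, cand_T[:3, :3], cand_T[:3, 3], lidar_edge_pts), dist_map)
        if cost < best_cost:
            best_T, best_cost = cand_T, cost
    return best_T, best_cost
```

Note that a cost of this form is only informative within a limited basin of attraction around the true extrinsics, so in practice it would be refined from a reasonable initial guess rather than searched from scratch.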
