Extrinsic calibration of a single line scanning lidar and a camera

Lidar and visual imagery have been widely used in computer vision and mobile robotics because the two sensors provide complementary information. To convert data between their local coordinate systems, however, the rigid-body transformation between the sensors must be estimated. In this paper, we propose a robust-weighted extrinsic calibration algorithm that is easy to implement and achieves small calibration error. The extrinsic calibration parameters are estimated by minimizing the distances between corresponding features projected onto the image plane; the features are edge and centerline features on a v-shaped calibration target. The proposed algorithm improves calibration accuracy in two ways. First, we assign different weights to the point-to-line distances according to the correspondence accuracy of the features. Second, we apply a penalizing function to suppress the influence of outliers in the calibration data sets. We conduct several experiments to evaluate the performance of our extrinsic calibration algorithm, including a comparison of the RMS distance between ground-truth and projected points, the effect of the number of lidar scans and images, and the effect of the pose and range of the calibration target. The experiments show that our algorithm achieves calibration accuracy more than 50% better than an existing state-of-the-art approach. To evaluate its generality, we also colorize point clouds using different pairs of lidars and cameras calibrated by our algorithm.
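To make the cost being minimized concrete, the following is a minimal Python sketch of a robust-weighted point-to-line calibration objective. It is not the paper's implementation: the inputs (lidar feature points, corresponding image lines, intrinsics `K`, the weights) are hypothetical synthetic data, and SciPy's robust least-squares with a Huber loss is used as a stand-in for the paper's per-feature weighting and penalizing function.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def project(params, lidar_pts, K):
    """Project 3-D lidar points into the image; params = (rx, ry, rz, tx, ty, tz)."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:].reshape(3, 1)
    cam_pts = R @ lidar_pts.T + t          # 3 x N points in the camera frame
    uv = K @ cam_pts                       # pinhole projection
    return (uv[:2] / uv[2]).T              # N x 2 pixel coordinates


def residuals(params, lidar_pts, lines, weights, K):
    """Weighted signed distance of each projected lidar feature to its image line.

    Each row of `lines` is (a, b, c) for a*u + b*v + c = 0 with a^2 + b^2 = 1.
    `weights` down-weights features whose correspondence is less certain.
    """
    uv = project(params, lidar_pts, K)
    d = lines[:, 0] * uv[:, 0] + lines[:, 1] * uv[:, 1] + lines[:, 2]
    return weights * d


# Hypothetical calibration data: N lidar feature points and matching image lines.
rng = np.random.default_rng(0)
N = 40
lidar_pts = rng.uniform(-1.0, 1.0, (N, 3)) + np.array([0.0, 0.0, 3.0])
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
true_params = np.array([0.02, -0.01, 0.03, 0.05, -0.02, 0.10])

# Build image lines with random orientations passing through the true projections,
# as stand-ins for detected edge and centerline features.
uv_true = project(true_params, lidar_pts, K)
normals = rng.normal(size=(N, 2))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
lines = np.column_stack([normals, -(normals * uv_true).sum(axis=1)])
weights = np.ones(N)

# Robust minimization: the Huber loss limits the influence of outlier
# correspondences, analogous in spirit to the paper's penalizing function.
result = least_squares(residuals, x0=np.zeros(6),
                       args=(lidar_pts, lines, weights, K),
                       loss="huber", f_scale=2.0, method="trf")
print("estimated extrinsics:", result.x)
```

Note that SciPy's robust losses require the trust-region solver rather than Levenberg-Marquardt; with a plain squared loss and per-feature weights one could instead pass `method="lm"`, which is closer to the optimizer used in this line of work.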
