LiDAR-ToF-Binocular depth fusion using gradient priors

Most robotic systems operate in complex environments in which a single vision sensor cannot fully perceive the surroundings. In this paper, we focus on how to combine the depth images from a traditional binocular (stereo) camera, a ToF (time-of-flight) camera, and an emerging 16-line LiDAR (light detection and ranging) sensor to obtain an accurate, dense depth image. To bring the depth images from the different sensors into a common perspective, we employ a simple method for extrinsic parameter calibration. Based on the unified depth images, a fast and accurate fusion algorithm is developed. Our experiments show that the proposed method greatly improves depth density and accuracy while maintaining fast runtime.
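The paper itself does not include code; as a rough illustration of the "unify to a common perspective" step, the sketch below projects a LiDAR point cloud into the camera image plane using extrinsics (R, t) and intrinsics (K) assumed to come from the calibration described above. All function and variable names are hypothetical, and the rasterization is a minimal nearest-depth scheme rather than the authors' fusion algorithm.

```python
# Minimal sketch (not the authors' code): project LiDAR points into the
# camera view so that all sensors share one depth-image perspective.
# R, t (LiDAR-to-camera extrinsics) and K (camera intrinsics) are assumed
# to be known from extrinsic calibration; names are illustrative.
import numpy as np

def lidar_to_depth_image(points_lidar, R, t, K, image_shape):
    """Project Nx3 LiDAR points into the camera frame and rasterize a sparse depth image."""
    # Transform points from the LiDAR frame into the camera frame.
    points_cam = points_lidar @ R.T + t  # (N, 3)

    # Keep only points in front of the camera.
    points_cam = points_cam[points_cam[:, 2] > 0.1]

    # Pinhole projection to pixel coordinates.
    uvw = points_cam @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]

    # Rasterize: keep the nearest depth when several points fall on one pixel.
    h, w = image_shape
    depth = np.zeros((h, w), dtype=np.float32)
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    for ui, vi, zi in zip(u[valid], v[valid], points_cam[valid, 2]):
        if depth[vi, ui] == 0 or zi < depth[vi, ui]:
            depth[vi, ui] = zi
    return depth
```

With the stereo and ToF depth maps resampled into the same camera frame in an analogous way, the three aligned depth images can then be fed to the fusion stage.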
