Fusion of TLS and RGB point clouds with TIR images for indoor mobile mapping

Obtaining accurate 3D descriptions in the thermal infrared (TIR) is quite a challenging task due to the low geometric resolution of TIR cameras and the low number of strong features in TIR images. Combining the radiometric information of the thermal infrared with 3D data from another sensor can overcome most of the limitations in 3D geometric accuracy. In the case of dynamic scenes with moving objects or a moving sensor system, a combination with RGB cameras and profile laserscanners is suitable. As a laserscanner is an active sensor in the visible red or near infrared (NIR) and the thermal infrared camera captures the radiation emitted by the objects in the observed scene, the combination of these two sensors for close-range applications is independent of external illumination or textures in the scene. This contribution focuses on the fusion of point clouds from terrestrial laserscanners and RGB cameras with images from a thermal infrared camera, all mounted together on a robot for indoor 3D reconstruction. The system is geometrically calibrated, including the lever arm between the different sensors. As the fields of view of the sensors differ, the sensors do not record the same scene points at exactly the same time. Thus, the 3D scene points of the laserscanner and the photogrammetric point cloud from the RGB camera have to be synchronized before the point clouds are fused and the thermal channel is added to the 3D points.
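The final fusion step described above (adding a thermal channel to calibrated 3D points) can be sketched as a projection of each scanner point into the TIR image followed by a radiometric lookup. The sketch below assumes a simple pinhole model without lens distortion; the function name, the nearest-neighbour pixel lookup, and the NaN marker for points outside the TIR field of view are illustrative choices, not the authors' implementation.

```python
import numpy as np

def colorize_points_with_tir(points_xyz, tir_image, K, R, t):
    """Attach a thermal channel to 3D points by projecting them into a
    calibrated TIR image (pinhole model, no lens distortion).

    points_xyz : (N, 3) points in the common (e.g. laserscanner) frame
    tir_image  : (H, W) array of radiometric values
    K          : (3, 3) TIR camera intrinsic matrix
    R, t       : rotation (3, 3) and translation (3,) mapping scanner
                 coordinates into the TIR camera frame (the calibrated
                 lever arm between the sensors)
    """
    # Transform scanner points into the TIR camera frame
    cam = points_xyz @ R.T + t
    # Only points in front of the camera can be imaged
    in_front = cam[:, 2] > 0
    # Perspective projection to pixel coordinates
    uvw = cam @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]
    px = np.round(uv).astype(int)
    h, w = tir_image.shape
    visible = (in_front
               & (px[:, 0] >= 0) & (px[:, 0] < w)
               & (px[:, 1] >= 0) & (px[:, 1] < h))
    # Nearest-neighbour lookup; NaN marks points outside the TIR image
    thermal = np.full(len(points_xyz), np.nan)
    thermal[visible] = tir_image[px[visible, 1], px[visible, 0]]
    return thermal
```

A full pipeline would additionally apply the calibrated TIR lens distortion and an occlusion test (e.g. a z-buffer) before the lookup, so that points hidden behind foreground geometry do not receive foreground temperatures.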
