A multiple camera calibration and point cloud fusion tool for Kinect V2

Abstract This paper introduces a tool for calibrating multiple Kinect V2 sensors. The calibration requires at least three acquisitions from each camera. The method exploits the Kinect's coordinate mapping capabilities to register data between the camera, depth, and color spaces. It follows a novel approach that obtains multiple 3D point matches between adjacent sensors and uses them to estimate the camera parameters. Once the cameras are calibrated, the tool performs point cloud fusion by transforming all 3D points into a single reference frame. We tested the system with a network of four Kinect V2 sensors and present calibration results. The tool is implemented in MATLAB using the Kinect for Windows SDK 2.0.
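The core step described above, estimating the relative pose between two adjacent sensors from matched 3D points, can be sketched with the standard Kabsch/Procrustes solution. This is an illustrative sketch, not the tool's actual MATLAB implementation; the function name and array layout are assumptions.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Estimate R, t such that dst ≈ R @ src + t from matched 3D points.

    src, dst: (N, 3) arrays of corresponding points from two sensors.
    Uses the SVD-based Kabsch algorithm (least-squares rigid alignment).
    """
    c_src = src.mean(axis=0)          # centroid of source points
    c_dst = dst.mean(axis=0)          # centroid of destination points
    H = (src - c_src).T @ (dst - c_dst)   # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

def fuse_point_cloud(points, R, t):
    """Transform a sensor's point cloud into the reference frame."""
    return points @ R.T + t
```

With one such transform per camera pair, every sensor's cloud can be chained into a single reference frame, which is the fusion step the abstract describes.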
