Multiple Kinect V2 Calibration

In this paper, we propose a method to easily calibrate multiple Kinect V2 sensors. It requires the cameras to simultaneously observe a 1D object shown at different orientations (at least three) or a 2D object in at least one acquisition. This is possible thanks to the built-in coordinate mapping capabilities of the Kinect. Our method follows five steps: image acquisition, pre-calibration, point cloud matching, intrinsic parameter initialization, and final calibration. We modeled the radial and tangential distortion parameters of all the cameras, obtaining a root mean square re-projection error of 0.2 pixels on the depth cameras and 0.4 pixels on the color cameras. To validate the calibration results, we performed colored point cloud fusion and 3D reconstruction using the depth and color information from four Kinect sensors.
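
The pre-calibration and point cloud matching steps amount to estimating a rigid transformation between corresponding 3D points observed by different Kinects. The sketch below is not the paper's implementation; it is a minimal, hypothetical illustration of that idea, assuming matched 3D marker positions (e.g., the 1D object's markers recovered through the Kinect's coordinate mapping) are already available in two camera frames, and aligning them with the closed-form Kabsch/SVD solution. The function and variable names are ours, not the authors'.

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t with dst ~= src @ R.T + t
    for two N x 3 arrays of corresponding 3D points (Kabsch / SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Hypothetical data: marker positions of the calibration object seen by two
# Kinects, expressed in each camera's own frame. Real points would come from
# the sensors' depth and coordinate-mapping output; these are synthetic.
rng = np.random.default_rng(0)
pts_cam0 = rng.uniform(-1.0, 1.0, size=(30, 3)) + np.array([0.0, 0.0, 2.5])
theta = np.deg2rad(25.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
pts_cam1 = pts_cam0 @ R_true.T + np.array([0.4, 0.0, 1.0])

# Recover the cam1 -> cam0 rigid transform and report the alignment residual.
R, t = rigid_transform(pts_cam1, pts_cam0)
residual = pts_cam1 @ R.T + t - pts_cam0
print("RMS alignment error (m):", np.sqrt((residual ** 2).sum(axis=1).mean()))
```

In a pipeline like the one described above, such a closed-form estimate would typically serve only as an initialization of the extrinsics, with the final calibration refined by nonlinear least squares (e.g., Levenberg-Marquardt); the reported RMS error gives a quick sanity check on the alignment.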
