A linear method for recovering the depth of Ultra HD cameras using a Kinect V2 sensor

Depth-Image-Based Rendering (DIBR) is a mature and important method for producing free-viewpoint video. On the one hand, most current DIBR research focuses on systems with low-resolution cameras, even though many Ultra HD rendering devices have already reached the market. On the other hand, the quality and accuracy of the depth image directly affect the final rendering result. In this paper we therefore address the problem of recovering depth information for Ultra HD cameras with the help of a Kinect V2 sensor. To this end, a linear least-squares method is proposed that recovers the rigid transformation between a Kinect V2 and an Ultra HD camera, using the depth information from the Kinect V2 sensor. In addition, a non-linear coarse-to-fine method based on Sparse Bundle Adjustment (SBA) is compared with this linear method. Experiments show that the proposed method outperforms the non-linear method for Ultra HD depth-image recovery in both computing time and precision.
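The abstract does not spell out the paper's exact linear formulation, so the following is only a minimal sketch of one standard linear least-squares route to a rigid transformation between two sensor frames: the Kabsch/Procrustes SVD solution over corresponding 3-D points, here taken to be points back-projected from the Kinect V2 depth map and matched into a hypothetical Ultra HD camera frame. All function and variable names (`rigid_transform_3d`, `pts_kinect`, `pts_uhd`) are illustrative, not from the paper.

```python
import numpy as np

def rigid_transform_3d(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t,
    estimated from N >= 3 corresponding 3-D points (Kabsch/Procrustes)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Synthetic sanity check: recover a known transform from noisy correspondences.
rng = np.random.default_rng(0)
pts_kinect = rng.uniform(-1.0, 1.0, size=(200, 3))   # hypothetical points in the Kinect V2 frame
angle = np.deg2rad(10.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.05, -0.02, 0.30])
pts_uhd = pts_kinect @ R_true.T + t_true + rng.normal(scale=1e-3, size=(200, 3))

R_est, t_est = rigid_transform_3d(pts_kinect, pts_uhd)
print("rotation error:   ", np.linalg.norm(R_est - R_true))
print("translation error:", np.linalg.norm(t_est - t_true))
```

A closed-form solution of this kind needs no initial guess and runs in a single SVD, which is consistent with the abstract's claim that the linear method beats an iterative SBA-based refinement in computing time; the paper's own derivation may of course differ in how correspondences are obtained and weighted.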
