A novel Kinect V2 registration method for large-displacement environments using camera and scene constraints

In many multi-Kinect V2 systems, the registration of the Kinect V2 sensors is a key step that directly affects system precision. Coarse-to-fine methods based on calibration objects are an effective way to solve the Kinect V2 registration problem; however, they may fail when the Kinect V2 cameras are separated by large displacements. To address this, a novel Kinect V2 registration method, also built on the coarse-to-fine framework, is proposed that exploits camera and scene constraints. Specifically, in the coarse estimation stage, scene constraints are exploited through off-the-shelf feature point detectors, and camera constraints are exploited through homography and fundamental matrices. In the refinement stage, an Iterative Closest Point (ICP)-based point cloud registration method is applied. Experimental results show that the proposed method achieves considerably higher precision than calibration-object-based registration in large-displacement environments.
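
To make the two-stage pipeline concrete, the sketch below shows one way such a coarse-to-fine registration could be assembled, assuming OpenCV for SIFT matching and RANSAC-based homography/fundamental-matrix fitting and Open3D for the ICP refinement. The function names, thresholds, and data flow are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a coarse-to-fine two-Kinect registration pipeline.
# Assumptions (not from the paper): OpenCV for SIFT matching and RANSAC
# model fitting, Open3D for ICP; all thresholds are illustrative.
import cv2
import numpy as np
import open3d as o3d


def coarse_match(img_a, img_b, ratio=0.75):
    """Match SIFT keypoints between the two Kinects' color images and keep
    only correspondences consistent with two-view geometry (camera constraints)."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_a, des_b, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]

    pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])

    # RANSAC-fitted fundamental matrix rejects matches that violate
    # epipolar geometry; a homography is fitted as a complementary
    # check for (near-)planar scenes.
    F, f_mask = cv2.findFundamentalMat(pts_a, pts_b, cv2.FM_RANSAC, 3.0, 0.99)
    H, h_mask = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 5.0)
    inliers = f_mask.ravel() == 1
    return pts_a[inliers], pts_b[inliers]


def coarse_transform(xyz_a, xyz_b):
    """Closed-form rigid transform from paired 3D points (inlier pixels
    back-projected with each Kinect's depth map and intrinsics), using the
    SVD-based least-squares fit of Arun et al."""
    ca, cb = xyz_a.mean(0), xyz_b.mean(0)
    U, _, Vt = np.linalg.svd((xyz_a - ca).T @ (xyz_b - cb))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T


def refine_icp(cloud_a, cloud_b, init_T, max_dist=0.05):
    """Refine the coarse estimate with ICP on the full depth point clouds."""
    result = o3d.pipelines.registration.registration_icp(
        cloud_a, cloud_b, max_dist, init_T,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation
```

In such a setup, the 2D inlier correspondences would be back-projected to 3D using each Kinect's depth map and intrinsics (e.g., obtained with Zhang's calibration) before the closed-form rigid fit, and the ICP stage then refines that coarse estimate on the full point clouds.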
