Three-Dimensional Dense Reconstruction Study Based on Kinect

With the continuous improvement of image-processing techniques and the rapid development of camera hardware, scene reconstruction has attracted growing research attention in both industry and everyday life. Scene sensing based on two-dimensional RGB images is now quite mature; however, RGB information alone is insufficient for accurately perceiving complex scenes, since the spatial position of objects is also essential to describing a scene. This paper combines a consumer-grade Kinect depth camera with a high-resolution two-dimensional color camera into a composite vision system, registering and fusing spatial position (depth) information with RGB color information to capture as much scene information as possible. Compared with other 3D reconstruction techniques, the proposed system preserves richer color information.
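The core step such a composite system needs is depth-to-color registration: back-projecting each depth pixel to a 3D point, transforming it into the color camera's coordinate frame, and projecting it onto the color image plane to sample an RGB value. The sketch below illustrates this pipeline under assumed calibration parameters (the intrinsic matrices `K_depth` and `K_color`, the rotation `R`, and the translation `t` are hypothetical placeholders; in practice they come from a calibration procedure such as Zhang's checkerboard method).

```python
import numpy as np

# Hypothetical calibration parameters -- real values must come from
# calibrating the actual Kinect / color-camera rig.
K_depth = np.array([[365.0,   0.0, 256.0],
                    [  0.0, 365.0, 212.0],
                    [  0.0,   0.0,   1.0]])   # depth camera intrinsics
K_color = np.array([[1050.0,    0.0, 960.0],
                    [   0.0, 1050.0, 540.0],
                    [   0.0,    0.0,   1.0]])  # color camera intrinsics
R = np.eye(3)                      # rotation: depth frame -> color frame
t = np.array([0.025, 0.0, 0.0])    # translation, ~2.5 cm baseline (assumed)

def register_depth_to_color(depth_mm):
    """Back-project every valid depth pixel to a 3D point, transform it
    into the color camera frame, and project it onto the color image.
    Returns (u, v) color-pixel coordinates and the 3D points (meters)."""
    h, w = depth_mm.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.ravel() / 1000.0               # millimeters -> meters
    valid = z > 0                               # zero depth = no measurement
    pix = np.stack([us.ravel(), vs.ravel(), np.ones(h * w)])
    pts_depth = np.linalg.inv(K_depth) @ pix * z   # 3xN points, depth frame
    pts_color = R @ pts_depth + t[:, None]         # into color camera frame
    proj = K_color @ pts_color
    uv = proj[:2] / proj[2]                        # perspective divide
    return uv[:, valid], pts_color[:, valid]

# Usage: a synthetic flat depth map, every pixel 1 m from the camera
depth = np.full((424, 512), 1000, dtype=np.uint16)
uv, pts = register_depth_to_color(depth)
```

After this mapping, each depth pixel can be colored by bilinearly sampling the high-resolution RGB image at its `(u, v)` location, producing the fused colored point cloud the paper describes.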
