KinectFusion: real-time dynamic 3D surface reconstruction and interaction

We present KinectFusion, a system that takes live depth data from a moving Kinect camera and creates high-quality, geometrically accurate 3D models in real time. Our system allows a user holding a Kinect camera to move quickly within any indoor space and rapidly scan it, creating a fused 3D model of the whole room and its contents within seconds. Even small motions, caused for example by camera shake, lead to new viewpoints of the scene and thus refinements of the 3D model, similar to the effect of image super-resolution. As the camera is moved closer to objects in the scene, more detail can be added to the acquired 3D model.
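The sketch below illustrates the kind of volumetric depth-map fusion this describes: each depth frame, together with an estimated camera pose, is integrated into a truncated signed distance function (TSDF) voxel grid by a weighted running average, so repeated observations of a surface refine the model. This is a minimal, assumption-laden sketch, not the paper's implementation; the grid size, voxel size, intrinsics, and truncation distance are illustrative, and the camera pose is assumed to come from an external tracker (e.g. ICP against the current model).

```python
# Minimal sketch of TSDF depth fusion in the spirit of KinectFusion.
# All parameters (voxel_size, trunc, grid resolution) are illustrative
# assumptions, not values from the paper.
import numpy as np

def integrate_depth_frame(tsdf, weights, depth, pose, K,
                          voxel_size=0.01, trunc=0.03):
    """Fuse one depth image (meters) into the TSDF volume.

    tsdf, weights : (N, N, N) float arrays, updated in place
    depth         : (H, W) depth image in meters (0 = no measurement)
    pose          : 4x4 camera-to-world transform (assumed known, e.g. from ICP)
    K             : 3x3 camera intrinsics
    """
    n = tsdf.shape[0]
    h, w = depth.shape
    # World coordinates of every voxel center.
    ii, jj, kk = np.meshgrid(np.arange(n), np.arange(n), np.arange(n),
                             indexing="ij")
    pts_w = np.stack([ii, jj, kk], axis=-1).reshape(-1, 3) * voxel_size
    # Transform voxel centers into the camera frame.
    world_to_cam = np.linalg.inv(pose)
    pts_c = pts_w @ world_to_cam[:3, :3].T + world_to_cam[:3, 3]
    z = pts_c[:, 2]
    # Project voxel centers into the depth image.
    with np.errstate(divide="ignore", invalid="ignore"):
        u = np.round(K[0, 0] * pts_c[:, 0] / z + K[0, 2]).astype(int)
        v = np.round(K[1, 1] * pts_c[:, 1] / z + K[1, 2]).astype(int)
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    valid &= d > 0
    # Signed distance along the viewing ray, truncated and normalized to [-1, 1].
    sdf = np.clip(d - z, -trunc, trunc) / trunc
    valid &= sdf > -1.0          # ignore voxels far behind the observed surface
    # Weighted running average fuses this frame with all previous frames.
    flat = np.flatnonzero(valid)
    t = tsdf.reshape(-1)
    wgt = weights.reshape(-1)
    t[flat] = (t[flat] * wgt[flat] + sdf[flat]) / (wgt[flat] + 1.0)
    wgt[flat] += 1.0
```

Because every frame only nudges the per-voxel average, noisy individual measurements are smoothed out while repeated close-range views sharpen detail, which is the super-resolution-like refinement the abstract refers to.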
