RGBD-Fusion: Depth Refinement for Diffuse and Specular Objects

The popularity of low-cost RGB-D scanners is growing steadily and has set off a major boost in 3D computer vision research. Nevertheless, commodity scanners often fail to capture subtle details in the environment: the accuracy of existing depth sensors is rarely sufficient to recover the fine details of scanned objects. In this chapter, we review recent axiomatic methods that enhance the depth map by fusing intensity and depth information into detailed range profiles. We present a novel shape-from-shading framework that improves the recovered depth profiles of both diffuse and specular objects. The first shading-based depth refinement method we review is designed to work well with Lambertian objects; however, it breaks down in the presence of specularities. To that end, we propose a second method, which utilizes the built-in monochromatic IR projector and the acquired IR images of common RGB-D scanners, together with a lighting model that accounts for the specular regions in the input image. In both methods, the detailed geometry is computed without the need to explicitly find and integrate surface normals, which allows the numerical implementations to run in real time. Finally, we show how deep learning can be leveraged to refine depth details: we present a neural network that is trained with the above models and can be naturally integrated as part of a larger network architecture. Both quantitative tests and visual evaluations demonstrate that the suggested methods produce state-of-the-art depth reconstruction results.
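
To make the shading-based fusion idea concrete, the sketch below shows the basic building blocks such methods rest on: estimating per-pixel normals from a rough depth map and fitting a first-order spherical-harmonics Lambertian lighting model to the intensity image. This is a minimal illustration, not the chapter's implementation; it assumes constant albedo and a first-order SH model, and all function names and the synthetic example are hypothetical.

```python
import numpy as np

def normals_from_depth(z):
    """Approximate per-pixel unit normals from a depth map via finite differences."""
    zy, zx = np.gradient(z)                        # depth derivatives along rows/cols
    n = np.dstack([-zx, -zy, np.ones_like(z)])     # surface normal of z(x, y), up to scale
    return n / np.linalg.norm(n, axis=2, keepdims=True)

def fit_sh_lighting(intensity, normals):
    """Least-squares fit of first-order spherical-harmonics lighting:
    I(p) ~ l0 + l1*nx + l2*ny + l3*nz  (Lambertian, constant albedo assumed)."""
    n = normals.reshape(-1, 3)
    A = np.hstack([np.ones((n.shape[0], 1)), n])   # design matrix [1, nx, ny, nz]
    b = intensity.reshape(-1)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs

def shade(normals, coeffs):
    """Render the shading image predicted by the fitted SH coefficients."""
    n = normals.reshape(-1, 3)
    A = np.hstack([np.ones((n.shape[0], 1)), n])
    return (A @ coeffs).reshape(normals.shape[:2])

# Example: recover lighting from a synthetic spherical bump (hypothetical data)
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
z = np.sqrt(np.clip(40**2 - (xx - 32)**2 - (yy - 32)**2, 1, None))
n = normals_from_depth(z)
light = np.array([0.2, 0.3, -0.1, 0.9])           # hypothetical SH coefficients
I = shade(n, light)                                # synthesize an intensity image
print(fit_sh_lighting(I, n))                       # recovers ~light
```

In a refinement pipeline, the residual between the rendered shading and the observed intensity drives a per-pixel update of the depth itself; the specular variant reviewed in this chapter additionally augments this Lambertian term with a specular component estimated from the IR image.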