Joint texture-depth pixel inpainting of disocclusion holes in virtual view synthesis

Transmitting texture and depth maps from one or more reference views enables a user to freely choose virtual viewpoints from which to synthesize images for observation via depth-image-based rendering (DIBR). Each DIBR-synthesized image, however, contains disocclusion holes: regions of missing pixels that were occluded from view in the reference images. Unlike previous hole-filling schemes that rely heavily (and unrealistically) on the availability of a high-quality depth map at the virtual view to guide inpainting of the corresponding texture map, in this paper we propose a new Joint Texture-Depth Inpainting (JTDI) algorithm that fills in missing texture and depth pixels simultaneously. Specifically, we first use the available partial depth information to compute priority terms that identify the next target pixel patch in a disocclusion hole for inpainting. Then, after identifying the best-matched texture patch in the known pixel region via template matching for texture inpainting, the variance of the corresponding depth patch is copied to the target depth patch for depth inpainting. Experimental results show that JTDI outperforms two previous inpainting schemes that either do not use available depth information during inpainting or depend on the availability of a good depth map at the virtual view for good inpainting performance.
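The per-patch fill step described above (template matching over the known region, then copying texture and depth together) can be sketched as follows. This is a minimal, illustrative NumPy sketch, not the paper's implementation: the depth-based priority ordering is omitted, patches are searched exhaustively with a sum-of-squared-differences cost, and depth pixels are simply copied from the matched source patch (a simplification of the paper's depth-inpainting step). All function and variable names are hypothetical.

```python
import numpy as np

def best_match_patch(texture, mask, target_top_left, patch=3):
    """Exemplar search: among all fully-known patches, find the one whose
    pixels best match the *known* pixels of the target patch (SSD cost)."""
    h, w = texture.shape
    ty, tx = target_top_left
    t_img = texture[ty:ty + patch, tx:tx + patch]
    t_known = mask[ty:ty + patch, tx:tx + patch]  # True = known pixel
    best, best_cost = None, np.inf
    for y in range(h - patch + 1):
        for x in range(w - patch + 1):
            if not mask[y:y + patch, x:x + patch].all():
                continue  # source candidates must contain no hole pixels
            cand = texture[y:y + patch, x:x + patch]
            cost = np.sum(((cand - t_img) * t_known) ** 2)
            if cost < best_cost:
                best, best_cost = (y, x), cost
    return best

def inpaint_patch(texture, depth, mask, target_top_left, patch=3):
    """One joint fill step: locate the best-matched source patch via the
    texture map, then copy both texture and depth pixels into the hole."""
    sy, sx = best_match_patch(texture, mask, target_top_left, patch)
    ty, tx = target_top_left
    hole = ~mask[ty:ty + patch, tx:tx + patch]  # missing pixels in target
    texture[ty:ty + patch, tx:tx + patch][hole] = \
        texture[sy:sy + patch, sx:sx + patch][hole]
    depth[ty:ty + patch, tx:tx + patch][hole] = \
        depth[sy:sy + patch, sx:sx + patch][hole]
    mask[ty:ty + patch, tx:tx + patch] = True  # patch is now fully known
    return texture, depth, mask
```

A full inpainter would wrap `inpaint_patch` in a loop that, at each iteration, selects the target patch with the highest priority (computed in the paper from the partial depth information) until no hole pixels remain.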
