Inter-view consistent hole filling in view extrapolation for multi-view image generation

This paper proposes a new inter-view consistent hole filling method for view extrapolation in multi-view image generation. In stereopsis, inter-view consistency of structure, color, and luminance is one of the crucial factors affecting the overall viewing quality of three-dimensional image content; in particular, inter-view inconsistency can induce visual stress on the human visual system. To ensure inter-view consistency, the proposed method fills holes in order from the view nearest the reference view to the farthest, propagating the filled color information from each preceding view. In addition, a novel depth map filling method is incorporated to further enforce inter-view consistency. Experimental results show that the proposed method significantly improves the inter-view consistency of multi-view images and depth maps compared with previous methods.
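The ordering-and-propagation idea in the abstract can be sketched as follows. This is a minimal illustration under assumed conventions (consecutive integer view indices, hypothetical `views` dictionary and `fill_hole` callback); the paper's actual warping, color propagation, and depth map filling are not reproduced here.

```python
# Sketch of nearest-to-farthest hole filling with propagation from the
# preceding (already filled) view. All names here are illustrative, not
# the paper's implementation.

def processing_order(view_ids, ref_id):
    """Order the extrapolated views from nearest to farthest
    with respect to the reference view."""
    return sorted((v for v in view_ids if v != ref_id),
                  key=lambda v: abs(v - ref_id))

def fill_views(views, ref_id, fill_hole):
    """Fill holes view by view; each view borrows color information
    from the adjacent view nearer the reference, which has already
    been filled, so the filled regions stay inter-view consistent."""
    filled = {ref_id: views[ref_id]}  # reference view has no holes to fill
    for v in processing_order(views.keys(), ref_id):
        prev = v - 1 if v > ref_id else v + 1  # adjacent view nearer the reference
        filled[v] = fill_hole(views[v], filled[prev])
    return filled
```

With views on both sides of the reference, each side is effectively processed outward independently, since the "preceding view" is always the neighbor closer to the reference.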
