Virtual View Quality Enhancement Using Side View Information for Free Viewpoint Video

With the advancement of display technologies, virtual viewpoint video needs to be synthesized from adjacent viewpoints to provide an immersive perceptual viewing experience of a scene. View synthesis techniques suffer from poor rendering quality due to holes created by occlusion in the warping process. Currently, spatial and temporal correlation techniques are used to improve the quality of the synthesized view. However, spatial correlation techniques, e.g. inpainting and inverse mapping (IM), cannot fill holes efficiently because of the low spatial correlation at the edges between foreground and background pixels. On the other hand, exploiting the temporal correlation among already synthesized frames through learning by Gaussian mixture modelling (GMM) can fill occluded areas efficiently. However, no synthesized frames are available for GMM learning when the user switches views instantly. To address these issues, the proposed view synthesis technique applies GMM to the adjacent (side) viewpoint videos. The number of GMM models at each pixel is then used to refine the pixel intensities of the synthesized view through a weighting factor between the intensities of the GMM models and those of the warped image. This technique provides better pixel correspondence and improves PSNR by 0.47~0.58 dB compared with the IM technique.
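The refinement step can be illustrated with a minimal sketch. The code below assumes OpenCV's MOG2 background subtractor as the per-pixel GMM learner applied to the side-view video, and then blends the warped frame with the learned GMM background using a single weighting factor; the function names, the `alpha` value, and the hole-mask handling are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal sketch: per-pixel GMM learning on an adjacent-view video (OpenCV MOG2),
# followed by weighted blending with the DIBR-warped frame. The weighting factor
# `alpha` and the hole-mask handling are illustrative assumptions.
import cv2
import numpy as np

def learn_gmm_background(side_view_path, history=100):
    """Run MOG2 (a per-pixel Gaussian mixture model) over the adjacent-view
    video and return its learned background image."""
    subtractor = cv2.createBackgroundSubtractorMOG2(history=history,
                                                    detectShadows=False)
    cap = cv2.VideoCapture(side_view_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        subtractor.apply(frame)          # updates the per-pixel mixtures
    cap.release()
    return subtractor.getBackgroundImage()

def refine_synthesized_view(warped, hole_mask, gmm_background, alpha=0.7):
    """Blend the warped view with the GMM background.

    warped         : DIBR-warped frame (H x W x 3, uint8)
    hole_mask      : boolean mask (H x W), True where warping left no pixel
    gmm_background : background image learned from the side view
    alpha          : weight on the warped pixel intensities (assumed value)
    """
    warped_f = warped.astype(np.float32)
    bg_f = gmm_background.astype(np.float32)
    blended = alpha * warped_f + (1.0 - alpha) * bg_f
    # Holes carry no warped information, so fill them entirely from the GMM model.
    blended[hole_mask] = bg_f[hole_mask]
    return np.clip(blended, 0, 255).astype(np.uint8)
```

In the paper, the weighting is driven by the number of Gaussian models retained at each pixel; here a fixed scalar stands in for that adaptive weight to keep the sketch short.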
