Quad-tree based inter-view motion prediction

As a 3D video extension of the Audio Video Coding Standard (AVS), 3D-AVS is being developed to improve the coding efficiency of multi-view video. Since multi-view video consists of projections of the same scene from different viewpoints at the same time instant, it contains a large amount of inter-view redundancy. To exploit this inter-view correlation, this paper presents a method to derive the motion parameters of a coding unit (CU) in the dependent view from the already coded inter-view picture. The algorithm is based on quad-tree partitioning: each CU can be recursively split into four sub-CUs of equal size, and whether a sub-CU is split further is determined by comparing the derived motion parameters. Experimental results show that the proposed method provides an 8.7% BD-rate saving on both video 1 and video 2 in the low-delay configuration, and the saving reaches up to 14.5% and 13.7% on video 1 and video 2, respectively, for the Balloons sequence. This method has been proposed and adopted into the 3D-AVS standard.
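To make the split decision concrete, the following is a minimal Python sketch of the recursive derivation described above. It is an illustration only, not the normative 3D-AVS procedure: the `lookup_motion` helper (standing in for a disparity-based lookup in the coded inter-view picture), the `MotionParams` fields, the similarity rule in `motion_similar`, and `min_cu_size` are all assumptions made for this sketch.

```python
from dataclasses import dataclass
from typing import Callable, List, Union


@dataclass
class MotionParams:
    mv_x: int     # horizontal motion vector component
    mv_y: int     # vertical motion vector component
    ref_idx: int  # reference picture index


# Callable that returns the motion parameters of the inter-view block
# corresponding to a given (x, y) position in the dependent view.
MotionLookup = Callable[[int, int], MotionParams]
CUResult = Union[MotionParams, List["CUResult"]]


def motion_similar(params: List[MotionParams]) -> bool:
    """Assumed comparison rule: sub-CUs share the reference index and
    their motion vectors differ by at most one unit."""
    first = params[0]
    return all(p.ref_idx == first.ref_idx
               and abs(p.mv_x - first.mv_x) <= 1
               and abs(p.mv_y - first.mv_y) <= 1
               for p in params)


def derive_cu_motion(x: int, y: int, size: int,
                     lookup_motion: MotionLookup,
                     min_cu_size: int = 8) -> CUResult:
    """Recursively derive motion for the CU at (x, y) with width/height `size`.

    The CU is tentatively split into four equal sub-CUs whose motion is
    derived from the already coded inter-view picture via `lookup_motion`.
    If the four derived parameter sets are (nearly) identical, the split is
    discarded and a single set represents the whole CU; otherwise each
    quadrant is processed recursively (quad-tree partitioning)."""
    half = size // 2
    if half < min_cu_size:
        return lookup_motion(x, y)

    offsets = [(0, 0), (half, 0), (0, half), (half, half)]
    sub_params = [lookup_motion(x + dx, y + dy) for dx, dy in offsets]

    if motion_similar(sub_params):
        return sub_params[0]  # sub-CUs agree: keep the CU unsplit

    return [derive_cu_motion(x + dx, y + dy, half, lookup_motion, min_cu_size)
            for dx, dy in offsets]


# Toy usage: a uniform motion field collapses to a single parameter set.
if __name__ == "__main__":
    uniform = lambda x, y: MotionParams(mv_x=4, mv_y=0, ref_idx=0)
    print(derive_cu_motion(0, 0, 64, uniform))
```

In this sketch the split decision is made top-down by comparing the motion derived for the four quadrants; regions with homogeneous inter-view motion stay unsplit, while regions with diverging motion are refined recursively.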
