Traditional multiview video coding (MVC) systems compress the texture content captured from different viewpoints, exploiting temporal and inter-view redundancy to improve coding efficiency. Advanced 3D video coding systems compress both the texture content and its corresponding depth captured from different viewpoints, known as multiview video plus depth (MVD), to support low-complexity free-viewpoint applications. However, MVD systems carry a large amount of data, comprising both texture and depth, that must be compressed and transmitted. To improve the coding efficiency of MVD systems, view synthesis prediction (VSP) can be used to further reduce inter-view redundancy by using synthesized views as predictors. In this paper, an in-loop view synthesis framework is proposed, in which the synthesized predictor is encoded as a special motion-compensated predictor and the corresponding motion information is encoded as one of the motion predictors in the skip/merge candidate list for HEVC-based 3D video coding. The proposed scheme is applicable to both texture coding and depth coding. Experimental results show that the proposed framework improves coding performance by up to 12.1% for dependent views.
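To illustrate the idea, the following is a minimal C++ sketch of depth-based view synthesis prediction and of appending a VSP candidate to a skip/merge list. All names (`synthesizePredictor`, `addVspCandidate`, `MergeCandidate`) and the linear depth-to-disparity model with `scale`/`offset` parameters are illustrative assumptions, not the paper's actual implementation; a real HEVC-based 3D codec derives disparity from camera parameters and integrates the candidate into its normative merge-list construction.

```cpp
// Hedged sketch: per-pixel depth-based warping to build a VSP block predictor,
// plus insertion of a special "VSP" candidate into a skip/merge candidate list.
// Hypothetical names and a linear depth-to-disparity model are assumed.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

struct Plane {
    int width = 0, height = 0;
    std::vector<uint8_t> samples;            // row-major 8-bit luma samples
    uint8_t at(int x, int y) const {
        x = std::clamp(x, 0, width - 1);     // clip to picture bounds
        y = std::clamp(y, 0, height - 1);
        return samples[static_cast<std::size_t>(y) * width + x];
    }
};

// Convert an 8-bit depth value to a horizontal disparity (in pixels) using a
// linear model; 'scale' and 'offset' stand in for camera-derived parameters.
inline int depthToDisparity(uint8_t depth, double scale, double offset) {
    return static_cast<int>(scale * depth + offset + 0.5);
}

// Build the VSP predictor for one block of the dependent view: each pixel is
// warped into the already-decoded base-view texture using the per-pixel
// disparity obtained from the associated depth map.
std::vector<uint8_t> synthesizePredictor(const Plane& baseTexture,
                                         const Plane& depthMap,
                                         int blockX, int blockY,
                                         int blockW, int blockH,
                                         double scale, double offset) {
    std::vector<uint8_t> pred(static_cast<std::size_t>(blockW) * blockH);
    for (int y = 0; y < blockH; ++y) {
        for (int x = 0; x < blockW; ++x) {
            uint8_t d = depthMap.at(blockX + x, blockY + y);
            int disp = depthToDisparity(d, scale, offset);
            pred[static_cast<std::size_t>(y) * blockW + x] =
                baseTexture.at(blockX + x + disp, blockY + y);
        }
    }
    return pred;
}

// A merge candidate is either an ordinary motion-compensated predictor or the
// special VSP candidate; the decoder re-derives the VSP predictor from depth,
// so only the candidate index needs to be signalled.
struct MergeCandidate {
    bool isVSP = false;
    int mvX = 0, mvY = 0;   // motion vector for ordinary candidates
    int refIdx = -1;        // reference picture index
};

// Append the VSP candidate to the skip/merge list without exceeding the
// allowed number of candidates.
void addVspCandidate(std::vector<MergeCandidate>& mergeList, std::size_t maxCands) {
    if (mergeList.size() < maxCands) {
        MergeCandidate vsp;
        vsp.isVSP = true;   // signals "use the synthesized predictor"
        mergeList.push_back(vsp);
    }
}
```

Under these assumptions, the encoder would evaluate the VSP candidate alongside the ordinary spatial and temporal merge candidates and pick the one with the best rate-distortion cost, which is how the skip/merge mechanism keeps the added signalling cost to a single candidate index.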