The video plus depth format, which is composed of a texture video and a depth video, has been widely used for free viewpoint TV. However, temporal inconsistency is often encountered in the depth video due to errors incurred in depth estimation. This inevitably deteriorates both the coding efficiency of the depth video and the visual quality of the synthesized view. To address this problem, a content-adaptive temporal consistency enhancement (CTCE) algorithm for the depth video is proposed in this paper, which consists of two sequential stages: (1) classification of stationary and non-stationary regions based on the texture video, and (2) adaptive temporal consistency filtering on the depth video. The result of the first stage steers the second stage so that the filtering is conducted in an adaptive manner. Extensive experimental results show that the proposed CTCE algorithm effectively mitigates the temporal inconsistency in the original depth video and consequently improves both the coding efficiency of the depth video and the visual quality of the synthesized view.
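The two-stage pipeline described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual method: the per-pixel texture-difference threshold in stage 1 and the simple two-tap recursive average in stage 2 are assumptions standing in for the paper's classification criterion and adaptive filter; only the overall structure (texture-driven classification steering depth filtering) follows the abstract.

```python
import numpy as np

def classify_stationary(texture_prev, texture_curr, thresh=10.0):
    """Stage 1 (sketch): mark a pixel as stationary when the texture video
    barely changes between consecutive frames. The absolute-difference
    threshold is an illustrative assumption, not the paper's criterion."""
    diff = np.abs(texture_curr.astype(np.float64) - texture_prev.astype(np.float64))
    return diff < thresh  # True => stationary region

def enhance_depth(depth_prev, depth_curr, stationary, alpha=0.5):
    """Stage 2 (sketch): temporally filter the depth video only in
    stationary regions; non-stationary pixels keep their current depth.
    A simple weighted average of the previous and current depth stands in
    for the paper's adaptive temporal consistency filter."""
    filtered = alpha * depth_prev + (1.0 - alpha) * depth_curr
    return np.where(stationary, filtered, depth_curr)
```

In stationary regions the depth value is pulled toward its temporal neighbor, suppressing frame-to-frame flicker caused by depth-estimation errors, while moving regions are left untouched so that genuine depth changes are preserved.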