Block-based spatio-temporal prediction for video coding

This paper proposes a block-based spatio-temporal prediction method for video coding. In this method, the predicted value at each pel is generated by a linear 3D predictor that uses the causal neighborhood in both the current frame and the motion-compensated previous frame. When the causal neighborhood lies within the block being predicted, previously predicted values are recursively used instead of reconstructed ones. The method can therefore be incorporated into DCT-based residual coding schemes, where reconstructed values become available only on a block-by-block basis. To minimize the sum of squared prediction errors, a set of 3D predictors is iteratively optimized using the quasi-Newton method. Simulation results indicate that, within the framework of the proposed method, joint spatio-temporal prediction attains higher PSNR than exclusive use of either spatial or temporal prediction.
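The following is a minimal sketch, not the authors' implementation, of the two ideas the abstract describes: recursive block-based linear 3D prediction from a causal neighborhood, and quasi-Newton refinement of the predictor coefficients. The 3-tap spatial support (left, top, top-left), the single collocated temporal tap, the coefficient vector `w`, and the helper names `predict_block`, `sse`, and `optimize_predictor` are all illustrative assumptions; the motion-compensated reference block `ref_mc` is assumed to be already aligned with the current block, and block positions are assumed to leave at least one row and column of reconstructed pels above and to the left.

```python
import numpy as np
from scipy.optimize import minimize


def predict_block(cur, ref_mc, top_left, block_size, w):
    """Predict one block pel-by-pel with a linear 3D predictor.

    Causal spatial neighbours that fall inside the block being predicted are
    taken from the running prediction itself (recursive use of previously
    predicted values); neighbours outside the block come from the already
    reconstructed current frame `cur`.
    """
    y0, x0 = top_left
    pred = np.zeros((block_size, block_size))
    for i in range(block_size):
        for j in range(block_size):
            y, x = y0 + i, x0 + j

            def causal(dy, dx):
                yy, xx = y + dy, x + dx
                inside = (y0 <= yy < y0 + block_size) and (x0 <= xx < x0 + block_size)
                # Inside the block: the neighbour was predicted earlier in raster order.
                return pred[yy - y0, xx - x0] if inside else cur[yy, xx]

            support = np.array([
                causal(0, -1),    # left neighbour (spatial)
                causal(-1, 0),    # top neighbour (spatial)
                causal(-1, -1),   # top-left neighbour (spatial)
                ref_mc[i, j],     # collocated pel in motion-compensated previous frame (temporal)
            ])
            pred[i, j] = w @ support
    return pred


def sse(w, blocks):
    """Sum of squared prediction errors over a set of training blocks.

    Each entry of `blocks` is (cur_frame, ref_mc_block, top_left, block_size, target_block).
    """
    total = 0.0
    for cur, ref_mc, top_left, bs, target in blocks:
        e = target - predict_block(cur, ref_mc, top_left, bs, w)
        total += float(np.sum(e * e))
    return total


def optimize_predictor(blocks, w0=np.array([0.0, 0.0, 0.0, 1.0])):
    """Quasi-Newton (BFGS) refinement of the coefficients, starting from a purely temporal predictor."""
    res = minimize(sse, w0, args=(blocks,), method="BFGS")
    return res.x
```

Setting the spatial taps of `w0` to zero starts the search at plain motion-compensated (temporal) prediction, so any PSNR gain from the optimized coefficients reflects the added value of the spatial taps; this initialization is an illustrative choice, not one stated in the paper.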