Inter-Block Redundancy Reduction in Vector-Quantized Images by a Neural Predictor
A prediction scheme is proposed, which allows one to exploit residual interblock correlation after vector quantization (VQ), without loss in the quality of reconstructed images.
Such a scheme is based on a perceptron neural network whose task is to estimate a block once the neighboring blocks have already been coded. The network is trained on a set of images by means of the classical back-propagation algorithm. The error to be minimized is the Euclidean distance (i.e., the mean square error, MSE) between the block generated by the network (using the four previously coded neighboring blocks as input) and the current block.
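As a minimal sketch of this predictor, the following NumPy code trains a one-hidden-layer perceptron to map the four previously coded neighboring blocks to an estimate of the current block via back-propagation on the MSE. Block size, hidden width, learning rate, and the random stand-in data are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np

BLOCK = 4                      # assumed 4x4 pixel blocks
IN_DIM = 4 * BLOCK * BLOCK     # four neighboring blocks, flattened
OUT_DIM = BLOCK * BLOCK        # predicted current block

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.1, (IN_DIM, 64))
b1 = np.zeros(64)
W2 = rng.normal(0, 0.1, (64, OUT_DIM))
b2 = np.zeros(OUT_DIM)

def forward(x):
    h = np.tanh(x @ W1 + b1)          # hidden layer
    y = h @ W2 + b2                   # linear output: estimated block
    return h, y

def train_step(x, target, lr=1e-3):
    """One back-propagation step minimizing the MSE between the
    predicted block and the actual current block."""
    global W1, b1, W2, b2
    h, y = forward(x)
    err = y - target                  # gradient of 0.5*MSE w.r.t. the output
    gW2, gb2 = np.outer(h, err), err
    dh = (err @ W2.T) * (1 - h ** 2)  # back-propagate through tanh
    gW1, gb1 = np.outer(x, dh), dh
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
    return float(np.mean(err ** 2))

# Usage with random data standing in for blocks from the training images:
x = rng.random(IN_DIM)        # four previously coded neighboring blocks
t = rng.random(OUT_DIM)       # the current block
for _ in range(100):
    mse = train_step(x, t)
```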
During the encoding phase, for each block to be vector-quantized, the block produced by the network is used to generate a reduced codebook that, in the case of a correct prediction, allows one to use a smaller number of bits for the transmission of the block.
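The sketch below illustrates one plausible reading of this encoding step: the codewords closest to the predicted block form a reduced codebook, and when the true best-matching codeword lies in that subset its shorter index is transmitted; otherwise the full-codebook index is sent. The subset size, the one-bit escape flag, and the squared-error distance are assumptions for illustration, and the full-codebook search preserves reconstruction quality.

```python
import numpy as np

def encode_block(block, prediction, codebook, reduced_size=32):
    """Prediction-guided VQ encoding sketch.
    Returns (hit, index, bits)."""
    # Distances from the prediction to every codeword -> reduced codebook
    d_pred = np.sum((codebook - prediction) ** 2, axis=1)
    reduced_idx = np.argsort(d_pred)[:reduced_size]

    # Standard VQ search over the full codebook, so quality is unchanged
    d_block = np.sum((codebook - block) ** 2, axis=1)
    best = int(np.argmin(d_block))

    if best in set(reduced_idx):
        # Correct prediction: index within the reduced codebook, fewer bits
        local = int(np.where(reduced_idx == best)[0][0])
        bits = 1 + int(np.ceil(np.log2(reduced_size)))
        return True, local, bits
    # Miss: fall back to the full-codebook index
    bits = 1 + int(np.ceil(np.log2(len(codebook))))
    return False, best, bits

# Usage with a random codebook standing in for a trained one:
rng = np.random.default_rng(0)
codebook = rng.random((256, 16))          # 256 codewords of 4x4 blocks
block = rng.random(16)
prediction = block + 0.05 * rng.normal(size=16)
hit, index, bits = encode_block(block, prediction, codebook)
```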
Experimental results show that the probability of a correct prediction ranges from 60% to 75%, producing an increase in compression by a factor of 1.3 to 1.6. The main advantages of the presented approach lie in the efficiency of the neural predictor (even for images not included in the training set) and in its reduced memory requirements compared with previously proposed methods.