KLT-based adaptive vector quantization using PCNN

This paper proposes a KLT-CVQ (Karhunen-Loève transform classified vector quantization) scheme that uses a PCNN (principal component neural network) to improve the quality of the reconstructed images at a given bit rate. By combining the PCNN with classified vector quantization, we exploit the high energy compaction and complete decorrelation capabilities of the KLT together with the advantages of vector quantization (VQ) over scalar quantization (SQ) to improve the performance of the proposed hybrid coding technique. To preserve perceptual features such as edge components in the reconstructed images, we classify the input image blocks according to texture energy measures of the local statistics and then vector-code them adaptively to reduce possible edge degradation. Computer simulation results show that the proposed KLT-CVQ yields higher reconstructed image quality at a given bit rate than either KLT-CSQ (classified scalar quantization) or DCT (discrete cosine transform)-based CVQ.
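As a rough, illustrative sketch of the encoder pipeline described above (classify each block by a local energy measure, project it onto a class-specific KLT basis, then vector-quantize the transform coefficients with a class-specific codebook), the following Python fragment shows one possible organization. The block size, the variance-based classifier and its thresholds, and the SVD-based PCA used in place of the PCNN-learned KLT basis are assumptions for illustration only; the paper's actual texture energy measures, PCNN training rule, and codebook design are not reproduced here.

```python
import numpy as np

def extract_blocks(img, bs=8):
    """Split a grayscale image into non-overlapping bs x bs blocks, flattened row-wise."""
    h, w = img.shape
    img = img[:h - h % bs, :w - w % bs]
    return (img.reshape(img.shape[0] // bs, bs, img.shape[1] // bs, bs)
               .swapaxes(1, 2)
               .reshape(-1, bs * bs)
               .astype(np.float64))

def classify_blocks(blocks, thresholds=(50.0, 400.0)):
    """Assign each block to a class (0: shade, 1: texture, 2: edge) by block variance.
    The two-threshold variance rule is a stand-in for the paper's texture energy measures."""
    return np.digitize(blocks.var(axis=1), thresholds)

def klt_basis(train_blocks, k):
    """Estimate the k leading KLT basis vectors of a class from its training blocks.
    A batch SVD/PCA stands in for the PCNN (e.g. an APEX-style learning rule)."""
    centered = train_blocks - train_blocks.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k]                      # shape (k, bs*bs)

def vq_encode(coeffs, codebook):
    """Nearest-neighbour codeword search; returns one index per coefficient vector."""
    dist = ((coeffs[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return dist.argmin(axis=1)

def encode_image(img, bases, codebooks):
    """Classified transform-VQ encoder: per-class KLT projection, then per-class VQ.
    `bases` and `codebooks` are dicts keyed by class label, assumed trained offline."""
    blocks = extract_blocks(img)
    labels = classify_blocks(blocks)
    indices = np.empty(len(blocks), dtype=np.int64)
    for c in np.unique(labels):
        sel = labels == c
        coeffs = blocks[sel] @ bases[c].T          # KLT coefficients for class c
        indices[sel] = vq_encode(coeffs, codebooks[c])
    return labels, indices                         # side info + codeword indices
```

Because each class keeps its own basis and codebook, edge-class blocks can be given a richer codebook (or more retained coefficients) than shade blocks, which is how the classified structure helps limit edge degradation at a fixed overall bit rate.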
