Vector quantization (VQ) is a well-known method for signal compression. One of the main problems that remains unsatisfactorily solved in a VQ compression system is its encoding speed, which seriously constrains practical applications of the VQ method. The reason is that, during encoding, VQ must perform many expensive k-dimensional (k-D) Euclidean distance computations to determine the best-matched codeword in the codebook for each input vector, i.e., the codeword with the minimum Euclidean distance. The most straightforward approach in a VQ framework is to treat a k-D vector as a whole. By using the statistical features of the sum and the variance of a k-D vector to estimate the real Euclidean distance first, the IEENNS method was proposed to reject most of the unlikely candidate codewords for a given input vector. Because the sum and the variance are only approximate descriptions of a vector, and they describe a shorter vector more precisely, it is better to replace the sum and the variance of the whole vector with partial sums and partial variances computed by treating the k-D vector as two lower-dimensional subvectors. By dividing a k-D vector equally in half to generate its two corresponding (k/2)-D subvectors and applying the IEENNS rejection test to each subvector, the SIEENNS method was proposed recently. SIEENNS is so far the most search-efficient subvector-based encoding method for VQ, but it still contains considerable memory and computational redundancy. This paper improves the state-of-the-art SIEENNS method by (1) introducing a new 3-level data structure to reduce the memory redundancy; (2) dropping the two partial variances of the two (k/2)-D subvectors to reduce the computational redundancy; and (3) combining the two partial sums of the two (k/2)-D subvectors to strengthen the codeword rejection test. Experimental results confirm that the proposed method reduces the total memory requirement per k-D vector from (k + 6) to (k + 1) and meanwhile remarkably improves the overall search efficiency by 72.3–81.1% compared to the SIEENNS method.
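To make the rejection idea concrete, the following is a minimal Python sketch of the sum- and variance-based lower bound underlying the IEENNS family, combined with the partial-sum bound obtained by splitting a vector in half, as the abstract describes. The function names and the use of NumPy are illustrative assumptions; the paper's 3-level data structure and exact SIEENNS search order are not reproduced here. The bounds follow from the orthogonal decomposition of a vector into its mean component and its mean-removed remainder, where S is the component sum and V is the Euclidean norm of the mean-removed vector.

```python
import numpy as np

def sum_and_variance(v):
    """S = component sum; V = Euclidean norm of the mean-removed vector
    (the 'variance' feature used by the EENNS/IEENNS family)."""
    s = float(v.sum())
    var = float(np.linalg.norm(v - s / v.size))
    return s, var

def half_sums(v):
    """Partial sums of the two (k/2)-D subvectors (the partial-sum
    features that the proposed test combines)."""
    h = v.size // 2
    return float(v[:h].sum()), float(v[h:].sum())

def encode(x, codebook):
    """Find the best-matched codeword for input vector x, applying
    lower-bound rejection tests before any full k-D distance.

    Bounds used (both provable via Cauchy-Schwarz / orthogonality):
        d^2(x, c) >= (S_x - S_c)^2 / k + (V_x - V_c)^2            (IEENNS)
        d^2(x, c) >= sum over halves of (S_xi - S_ci)^2 / (k/2)   (partial sums)
    """
    k = x.size
    sx, vx = sum_and_variance(x)
    s1x, s2x = half_sums(x)

    # In a real encoder these per-codeword features are precomputed offline
    # and the codebook is sorted by sum so the scan can start near sx.
    feats = [(sum_and_variance(c), half_sums(c)) for c in codebook]

    best_i, best_d2 = -1, np.inf
    for i, c in enumerate(codebook):
        (sc, vc), (s1c, s2c) = feats[i]
        # Test 1: whole-vector sum + variance bound.
        if (sx - sc) ** 2 / k + (vx - vc) ** 2 >= best_d2:
            continue
        # Test 2: combined partial-sum bound; its sum term is tighter
        # than the whole-vector sum term by Cauchy-Schwarz.
        if ((s1x - s1c) ** 2 + (s2x - s2c) ** 2) / (k / 2) >= best_d2:
            continue
        d2 = float(np.sum((x - c) ** 2))  # full k-D squared distance
        if d2 < best_d2:
            best_i, best_d2 = i, d2
    return best_i, best_d2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    codebook = rng.random((256, 16))  # 256 codewords, k = 16
    x = rng.random(16)
    print(encode(x, codebook))
```

Only codewords that survive both tests incur a full k-D distance computation; everything else is rejected from the cheap scalar features alone, which is the source of the search-efficiency gains the abstract reports.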