Fast encoding method for vector quantization based on subvector technique with a modified data structure

The encoding process of vector quantization (VQ) is the time bottleneck in its practical application. To speed up VQ encoding, the Euclidean distance can first be estimated with a much lighter computation in order to reject a candidate codeword early. Estimating the Euclidean distance requires appropriate features of a vector. By using the well-known statistical features of the sum and the variance of a k-dimensional vector, and furthermore of its two corresponding (k/2)-dimensional subvectors, the Euclidean distance can be estimated so as to reject most of the unlikely codewords for a given input vector (Guan, L. and Kamel, M., 1992; Lee, C.H. and Chen, L.H., 1994; Baek, S. et al., 1997; Pan, J.S. et al., 2003). Because computing the variance of a k-dimensional vector online is expensive, a new feature, constructed from the variances of the two subvectors, is used to estimate the Euclidean distance. Meanwhile, a modified, more memory-efficient data structure is proposed for storing all features of a vector, reducing the extra memory requirement compared with the latest previous work (Pan, J.S. et al., 2003). Experimental results confirm that the proposed method is more search-efficient.
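The rejection idea underlying this family of methods can be illustrated with a short sketch. The Python code below applies the standard mean/variance lower bounds on the squared Euclidean distance, first for the whole vector and then for its two (k/2)-dimensional halves, to skip the full distance computation for unlikely codewords. It is only an illustration of the general technique under these assumptions: the test order, the feature definitions (population standard deviation as the "variance" feature), and all function names are illustrative and are not the paper's exact algorithm or data structure.

```python
import numpy as np

def codeword_features(codebook):
    """Offline: precompute mean/std features of each codeword and of its two halves."""
    h = codebook.shape[1] // 2
    return [(c.mean(), c.std(),
             c[:h].mean(), c[:h].std(),
             c[h:].mean(), c[h:].std()) for c in codebook]

def encode_vector(x, codebook, feats):
    """Return the index of the nearest codeword to x, rejecting candidates
    via cheap lower bounds on the squared Euclidean distance."""
    k = x.shape[0]
    h = k // 2
    # Online features of the input vector and its two subvectors.
    mx, vx = x.mean(), x.std()
    mx1, vx1 = x[:h].mean(), x[:h].std()
    mx2, vx2 = x[h:].mean(), x[h:].std()

    best_i, best_d2 = 0, float(np.sum((x - codebook[0]) ** 2))
    for i in range(1, len(codebook)):
        mc, vc, mc1, vc1, mc2, vc2 = feats[i]
        # Bound 1: whole-vector mean and variance features,
        # d^2 >= k*(mx-mc)^2 + k*(vx-vc)^2.
        lb = k * (mx - mc) ** 2 + k * (vx - vc) ** 2
        if lb >= best_d2:
            continue  # reject without computing the full distance
        # Bound 2: the same bound applied to each (k/2)-dimensional half.
        lb = h * ((mx1 - mc1) ** 2 + (vx1 - vc1) ** 2
                  + (mx2 - mc2) ** 2 + (vx2 - vc2) ** 2)
        if lb >= best_d2:
            continue
        # Survivor: compute the full k-dimensional squared distance.
        d2 = float(np.sum((x - codebook[i]) ** 2))
        if d2 < best_d2:
            best_i, best_d2 = i, d2
    return best_i
```

Both bounds follow from the triangle inequality after removing the mean from each (sub)vector, so a codeword rejected by either test can never be the nearest neighbour; only the survivors incur the full k-dimensional distance computation.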