A Self-controlled Incremental Method for Vector Quantization

A new vector quantization method is proposed that generates codebooks incrementally. New codewords are inserted in regions of the input vector space where the quantization error is highest, until a desired error threshold is reached; a subsequent remove-insert phase then fine-tunes the codebook. The proposed method (1) overcomes the main shortcoming of the traditional LBG algorithm for vector quantization, its dependence on initial conditions; (2) outperforms recently published efficient algorithms such as the Enhanced LBG (Patanè, 2001) on the traditional task of finding a codebook that minimizes distortion for a fixed number of codewords; and (3) handles a new task that traditional methods do not address: for a fixed distortion error, minimizing the number of codewords while finding a suitable codebook. A comparison with ELBG on a set of image compression problems indicates that the new method is significantly better.
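The growth phase described above can be sketched as follows. This is a minimal illustration of the incremental-insertion idea only, not the authors' exact procedure: the insertion rule (placing a new codeword at the input vector with the largest quantization error), the fixed number of Lloyd refinement iterations, and the stopping test on mean squared error are all my assumptions, and the remove-insert fine-tuning phase is omitted.

```python
# Sketch of incremental codebook growth for vector quantization.
# Assumptions (not from the paper): new codewords are placed at the
# worst-quantized input vector, and each insertion is followed by a
# few Lloyd iterations. The remove-insert phase is not shown.
import numpy as np

def incremental_vq(data, error_threshold, lloyd_iters=5):
    """Grow a codebook until mean squared quantization error <= threshold."""
    # Start from a single codeword: the centroid of all data.
    codebook = data.mean(axis=0, keepdims=True)
    while True:
        # Squared distance from every vector to every codeword.
        d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        errs = d2.min(axis=1)          # per-vector quantization error
        if errs.mean() <= error_threshold:
            return codebook, errs.mean()
        # Insert a new codeword where the quantization error is highest.
        codebook = np.vstack([codebook, data[errs.argmax()]])
        # Refine the enlarged codebook with a few Lloyd iterations.
        for _ in range(lloyd_iters):
            d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            labels = d2.argmin(axis=1)
            for k in range(len(codebook)):
                pts = data[labels == k]
                if len(pts):
                    codebook[k] = pts.mean(axis=0)
```

Because the loop stops as soon as the error target is met, the number of codewords is determined by the data rather than fixed in advance, which is the "fixed distortion, minimize codewords" task the abstract refers to.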

[1] Giuseppe Patanè, et al. The enhanced LBG algorithm, 2001, Neural Networks.

[2] Joel Max, et al. Quantizing for minimum distortion, 1960, IRE Trans. Inf. Theory.

[3] Nikos A. Vlassis, et al. The global k-means clustering algorithm, 2003, Pattern Recognit.

[4] Bernd Fritzke, et al. The LBG-U Method for Vector Quantization – an Improvement over LBG Inspired from Neural Networks, 1997, Neural Processing Letters.

[5] Robert M. Gray, et al. An Algorithm for Vector Quantizer Design, 1980, IEEE Trans. Commun.

[6] S. P. Lloyd, et al. Least squares quantization in PCM, 1982, IEEE Trans. Inf. Theory.

[7] R. Gray, et al. Vector quantization, 1984, IEEE ASSP Magazine.

[8] Vladimir Cherkassky, et al. Learning from Data: Concepts, Theory, and Methods, 1998.