Combining neural networks and the wavelet transform for image compression

The authors present a new image compression scheme that combines the wavelet transform with neural networks. Compression proceeds in three steps. First, the image is decomposed at multiple scales using the wavelet transform, yielding an orthogonal wavelet representation of the image. Second, the wavelet coefficients are grouped into vectors, which are projected onto a lower-dimensional subspace by a neural network; because fewer coefficients are needed to represent each vector in the subspace than in the original space, this projection achieves data compression. Finally, the subspace projection coefficients are quantized and entropy coded. The advantages of several quantization schemes are discussed. Using these techniques, a 32:1 compression ratio was obtained at a peak SNR of 29 dB.
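The three-step pipeline can be sketched in numpy. This is a minimal illustration, not the authors' implementation: a one-level 2-D Haar transform stands in for their multiscale orthogonal wavelet decomposition, a PCA-style linear projection stands in for the neural network's learned subspace mapping, and uniform scalar quantization stands in for the quantization schemes they compare. All block sizes and the subspace dimension `k` are arbitrary choices for the demo.

```python
import numpy as np

def haar2d(img):
    """One level of an orthogonal 2-D Haar wavelet decomposition
    (a stand-in for the paper's multiscale wavelet transform)."""
    # average/difference along rows, scaled to keep the transform orthogonal
    a = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)
    d = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)
    rows = np.hstack([a, d])
    # then along columns
    a2 = (rows[0::2, :] + rows[1::2, :]) / np.sqrt(2)
    d2 = (rows[0::2, :] - rows[1::2, :]) / np.sqrt(2)
    return np.vstack([a2, d2])

def project_vectors(vectors, k):
    """Project coefficient vectors onto a k-dimensional subspace via PCA,
    a simplified linear stand-in for the neural-network projection."""
    mean = vectors.mean(axis=0)
    centered = vectors - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                 # top-k principal directions
    coeffs = centered @ basis.T    # k coefficients per vector (compression)
    return coeffs, basis, mean

def uniform_quantize(x, step):
    """Uniform scalar quantization of the subspace coefficients;
    the quantized integers would then be entropy coded."""
    return np.round(x / step).astype(np.int32)

# Toy demo: an 8x8 "image", 2x2 blocks of wavelet coefficients -> vectors
# of length 4, projected down to k = 2 coefficients each.
rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
coeffs2d = haar2d(img)
blocks = coeffs2d.reshape(4, 2, 4, 2).transpose(0, 2, 1, 3).reshape(16, 4)
proj, basis, mean = project_vectors(blocks, k=2)
q = uniform_quantize(proj, step=0.5)
print(proj.shape)  # 2 coefficients per vector instead of 4
```

Because the Haar transform is orthogonal, it preserves signal energy, so the distortion introduced by the scheme comes only from the subspace projection and the quantization step.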
