Simple universal lossy data compression schemes derived from the Lempel-Ziv algorithm

Two universal lossy data compression schemes, one with fixed rate and the other with fixed distortion, are presented, based on the well-known Lempel-Ziv algorithm. In the case of fixed rate R, the universal lossy data compression scheme works as follows: first pick a codebook B_n consisting of all reproduction sequences of length n whose Lempel-Ziv codeword length is ≤ nR, and then use B_n to encode the entire source sequence n-block by n-block. This fixed-rate data compression scheme is universal in the sense that for any stationary, ergodic source or for any individual sequence, the sample distortion performance as n → ∞ is given almost surely by the distortion-rate function. A similar result is shown in the context of fixed-distortion lossy source coding.
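As a rough illustration of the fixed-rate construction (a sketch, not the paper's exact scheme), the idea can be coded for small n over a binary alphabet: enumerate all length-n words, keep those whose Lempel-Ziv codeword length is at most nR as the codebook B_n, then encode each source n-block by its minimum-distortion codeword. Here an LZ78-style phrase count stands in for the Lempel-Ziv codeword length, and Hamming distortion is assumed:

```python
import math
from itertools import product

def lz78_phrase_count(seq):
    """Number of phrases in the LZ78 incremental parsing of seq."""
    dictionary = {()}
    phrases = 0
    cur = ()
    for s in seq:
        cur = cur + (s,)
        if cur not in dictionary:
            dictionary.add(cur)
            phrases += 1
            cur = ()
    if cur:  # count a trailing incomplete phrase
        phrases += 1
    return phrases

def lz78_code_length_bits(seq, alphabet_size=2):
    """Rough LZ78 codeword length: phrase i is encoded as a
    (dictionary index, next symbol) pair."""
    c = lz78_phrase_count(seq)
    return sum(math.ceil(math.log2(i + 1)) + math.ceil(math.log2(alphabet_size))
               for i in range(1, c + 1))

def codebook(n, rate):
    """B_n: all binary words of length n whose LZ code length is <= n * rate."""
    return [w for w in product((0, 1), repeat=n)
            if lz78_code_length_bits(w) <= n * rate]

def encode(source, n, rate):
    """Encode source n-block by n-block with the minimum-Hamming-distortion
    codeword from B_n."""
    B = codebook(n, rate)
    out = []
    for i in range(0, len(source) - n + 1, n):
        block = tuple(source[i:i + n])
        out.append(min(B, key=lambda w: sum(a != b for a, b in zip(w, block))))
    return out
```

Because LZ78's per-phrase overhead is large at small n, a generous rate (e.g. R = 2 bits per symbol at n = 8) is needed before B_n contains the highly compressible words; the asymptotic optimality in the abstract only emerges as n → ∞.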
