On the Redundancy of Lossy Source Coding with Abstract Alphabets

The redundancy problem of lossy source coding with abstract source and reproduction alphabets is considered. For coding at a fixed rate level, it is shown that for any fixed rate $R > 0$ and any memoryless abstract alphabet source $P$ satisfying some mild conditions, there exists a sequence $\{C_n\}_{n=1}^{\infty}$ of block codes at rate $R$ such that the distortion redundancy of $C_n$ (defined as the difference between the performance of $C_n$ and the distortion-rate function $d(P,R)$ of $P$) is upper-bounded by $|\partial d(P,R)/\partial R| \, \ln n/(2n) + o(\ln n/n)$. For coding at a fixed distortion level, it is demonstrated that for any $d > 0$ and any memoryless abstract alphabet source $P$ satisfying some mild conditions, there exists a sequence $\{C_n\}_{n=1}^{\infty}$ of block codes at the fixed distortion $d$ such that the rate redundancy of $C_n$ (defined as the difference between the performance of $C_n$ and the rate-distortion function $R(P,d)$ of $P$) is upper-bounded by $(7 \ln n)/(6n) + o(\ln n/n)$. These results strengthen Berger's classical (1968, 1971) abstract-alphabet source coding theorem, and extend the positive redundancy results of Zhang, Yang, and Wei (see ibid., vol. 43, no. 1, p. 71-91, 1997, and ibid., vol. 42, p. 803-21, 1996) on lossy source coding with finite alphabets, as well as the redundancy result of Wyner (see ibid., vol. 43, p. 1452-64, 1997) on block coding of memoryless Gaussian sources.
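
To make the fixed-rate bound concrete, the worked example below (our illustration, not part of the paper) specializes it to a memoryless Gaussian source with variance $\sigma^2$ under squared-error distortion, the setting of the Wyner result cited above; the rate $R$ is measured in nats, and the standard Gaussian distortion-rate function is assumed.

% Worked specialization (illustrative sketch, not from the paper):
% memoryless Gaussian source, variance \sigma^2, squared-error distortion, R in nats.
\[
  d(P,R) = \sigma^2 e^{-2R}
  \quad\Longrightarrow\quad
  \left|\frac{\partial d(P,R)}{\partial R}\right| = 2\sigma^2 e^{-2R} = 2\, d(P,R),
\]
% so the fixed-rate distortion-redundancy bound stated above becomes
\[
  \left|\frac{\partial d(P,R)}{\partial R}\right|\frac{\ln n}{2n}
    + o\!\left(\frac{\ln n}{n}\right)
  \;=\; d(P,R)\,\frac{\ln n}{n} + o\!\left(\frac{\ln n}{n}\right),
\]
% i.e., an excess distortion of order (ln n)/n over d(P,R), consistent in order
% with the Gaussian block-coding redundancy results cited above.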

[1] Abraham J. Wyner, The redundancy and distribution of the phrase lengths of the fixed-database Lempel-Ziv algorithm, 1997, IEEE Trans. Inf. Theory.

[2] Zhen Zhang, et al., The redundancy of source coding with a fidelity criterion: 1. Known statistics, 1997, IEEE Trans. Inf. Theory.

[3] En-Hui Yang, et al., On the redundancy of the fixed-database Lempel-Ziv algorithm for phi-mixing sources, 1997, IEEE Trans. Inf. Theory.

[4] Jorma Rissanen, et al., Complexity of strings in the class of Markov sources, 1986, IEEE Trans. Inf. Theory.

[5] Jorma Rissanen, et al., Universal coding, information, prediction, and estimation, 1984, IEEE Trans. Inf. Theory.

[6] David L. Neuhoff, et al., Fixed rate universal block source coding with a fidelity criterion, 1975, IEEE Trans. Inf. Theory.

[7] Zhen Zhang, et al., An On-Line Universal Lossy Data Compression Algorithm via Continuous Codebook Refinement - Part III: Redundancy Analysis, 1998, IEEE Trans. Inf. Theory.

[8] D. Ornstein, et al., Universal Almost Sure Data Compression, 1990.

[9] Toby Berger, Rate Distortion Theory for Sources with Abstract Alphabets and Memory, 1968, Inf. Control.

[10] Tamás Linder, et al., On the cost of finite block length in quantizing unbounded memoryless sources, 1996, IEEE Trans. Inf. Theory.

[11] Aaron D. Wyner, et al., On the Transmission of Correlated Gaussian Data over a Noisy Channel with Finite Encoding Block Length, 1972, Inf. Control.

[12] Robert M. Gray, et al., Source coding theorems without the ergodic assumption, 1974, IEEE Trans. Inf. Theory.

[13] Zhen Zhang, et al., An on-line universal lossy data compression algorithm via continuous codebook refinement - Part II: Optimality for phi-mixing source models, 1996, IEEE Trans. Inf. Theory.

[14] En-Hui Yang, et al., On the Performance of Data Compression Algorithms Based Upon String Matching, 1998, IEEE Trans. Inf. Theory.

[15] J. Rissanen, Stochastic Complexity and Modeling, 1986.

[16] Marcelo J. Weinberger, et al., Upper bounds on the probability of sequences emitted by finite-state sources and on the redundancy of the Lempel-Ziv algorithm, 1992, IEEE Trans. Inf. Theory.

[17] Aaron D. Wyner, et al., Improved redundancy of a version of the Lempel-Ziv algorithm, 1995, IEEE Trans. Inf. Theory.

[18] Neri Merhav, A comment on 'A rate of convergence result for a universal D-semifaithful code', 1995, IEEE Trans. Inf. Theory.

[19] Serap A. Savari, et al., Redundancy of the Lempel-Ziv incremental parsing rule, 1997, IEEE Trans. Inf. Theory.

[20] Michael B. Pursley, et al., Variable-rate universal block source coding subject to a fidelity constraint, 1978, IEEE Trans. Inf. Theory.

[21] P. Hall, Rates of convergence in the central limit theorem, 1983.

[22] Wojciech Szpankowski, et al., A suboptimal lossy data compression based on approximate pattern matching, 1997, IEEE Trans. Inf. Theory.

[23] Guy Louchard, et al., On the average redundancy rate of the Lempel-Ziv code, 1997, IEEE Trans. Inf. Theory.

[24] Jacob Ziv, et al., Coding of sources with unknown statistics-II: Distortion relative to a fidelity criterion, 1972, IEEE Trans. Inf. Theory.

[25] James A. Bucklew, et al., A large deviation theory proof of the abstract alphabet source coding theorem, 1988, IEEE Trans. Inf. Theory.

[26] Lee D. Davisson, et al., Universal noiseless coding, 1973, IEEE Trans. Inf. Theory.

[27] Zhen Zhang, et al., An on-line universal lossy data compression algorithm via continuous codebook refinement - Part I: Basic results, 1996, IEEE Trans. Inf. Theory.

[28] A. Kolmogorov, Three approaches to the quantitative definition of information, 1968.

[29] John C. Kieffer, et al., Sample converses in source coding theory, 1991, IEEE Trans. Inf. Theory.

[30] Michelle Effros, et al., A vector quantization approach to universal noiseless coding and quantization, 1996, IEEE Trans. Inf. Theory.

[31] R. Gray, Entropy and Information Theory, 1990, Springer New York.

[32] A. Wyner, Communication of analog data from a Gaussian source over a noisy channel, 1968.

[33] R. Gallager, Information Theory and Reliable Communication, 1968.

[34] John C. Kieffer, et al., A unified approach to weak universal source coding, 1978, IEEE Trans. Inf. Theory.

[35] Tamás Linder, et al., Rates of convergence in the source coding theorem, in empirical quantizer design, and in universal lossy source coding, 1994, IEEE Trans. Inf. Theory.

[36] Raphail E. Krichevsky, et al., The performance of universal encoding, 1981, IEEE Trans. Inf. Theory.

[37] R. J. Pilc, The transmission distortion of a source as a function of the encoding block length, 1968.

[38] Bin Yu, et al., A rate of convergence result for a universal D-semifaithful code, 1993, IEEE Trans. Inf. Theory.

[39] John C. Kieffer, et al., A survey of the theory of source coding with a fidelity criterion, 1993, IEEE Trans. Inf. Theory.

[40] Tamás Linder, et al., Fixed-rate universal lossy source coding and rates of convergence for memoryless sources, 1995, IEEE Trans. Inf. Theory.

[41] Peter Elias, et al., Universal codeword sets and representations of the integers, 1975, IEEE Trans. Inf. Theory.

[42] C. E. Shannon, A mathematical theory of communication, 1948, Bell Syst. Tech. J.