The redundancy of source coding with a fidelity criterion: 1. Known statistics

The problem of the redundancy of source coding with respect to a fidelity criterion is considered. For any fixed rate R > 0 and any memoryless source with finite source and reproduction alphabets and common distribution p, the nth-order distortion redundancy D_n(R) of fixed-rate coding is defined as the minimum, over all block codes of length n and rate R, of the difference between the code's expected distortion per symbol and the distortion-rate function d(p,R) of the source p. It is demonstrated that, for all sufficiently large n, D_n(R) = -(∂/∂R)d(p,R) · ln n/(2n) + o(ln n/n), where (∂/∂R)d(p,R) is the partial derivative of d(p,R) with respect to R, evaluated at R and assumed to exist. For any fixed distortion level d > 0 and any memoryless source p, the nth-order rate redundancy R_n(d) of coding at fixed distortion level d (that is, with d-semifaithful codes) is defined as the minimum, over all d-semifaithful codes of length n, of the difference between the code's expected rate per symbol and the rate-distortion function R(p,d) of p evaluated at d. It is proved that, for all sufficiently large n, R_n(d) is upper-bounded by ln n/n + o(ln n/n) and lower-bounded by ln n/(2n) + o(ln n/n). As a by-product, the lower bound on R_n(d) derived in this paper gives a positive answer to a conjecture proposed by Yu and Speed (1993).
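As an illustrative sketch (not part of the paper), the leading redundancy term -(∂/∂R)d(p,R) · ln n/(2n) can be evaluated numerically in the simplest concrete case: a binary memoryless source with bias p under Hamming distortion, where d(p,R) = h⁻¹(h(p) − R) with h the binary entropy in nats, so that ∂d/∂R = −1/h′(d) and h′(d) = ln((1−d)/d). The function names below are my own, and the bisection inverse is an assumption of convenience, not the paper's method.

```python
import math

def h(x):
    """Binary entropy in nats; 0 at the endpoints by convention."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log(x) - (1.0 - x) * math.log(1.0 - x)

def distortion_rate(p, R):
    """d(p,R) = h^{-1}(h(p) - R) for a binary source under Hamming
    distortion, found by bisection on [0, min(p, 1-p)] where h is
    strictly increasing."""
    target = h(p) - R
    if target <= 0.0:
        return 0.0
    lo, hi = 0.0, min(p, 1.0 - p)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if h(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def leading_redundancy(p, R, n):
    """Leading term -(d/dR) d(p,R) * ln(n) / (2n) from the abstract,
    specialized to the binary/Hamming case via dd/dR = -1 / h'(d)."""
    d = distortion_rate(p, R)
    dd_dR = -1.0 / math.log((1.0 - d) / d)
    return -dd_dR * math.log(n) / (2.0 * n)
```

For example, with p = 0.5 and R chosen so that d(p,R) = 0.1, the leading term is ln n / (2n · ln 9), which decays like ln n/n, consistent with the asymptotics stated above.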

[1] Robert B. Ash, Information Theory, 1965.

[2] J. Rissanen, Stochastic complexity and modeling, 1986.

[3] Jorma Rissanen, et al., Density estimation by stochastic complexity, 1992, IEEE Trans. Inf. Theory.

[4] Katalin Marton, et al., A simple proof of the blowing-up lemma, 1986, IEEE Trans. Inf. Theory.

[5] Marcelo J. Weinberger, et al., Upper bounds on the probability of sequences emitted by finite-state sources and on the redundancy of the Lempel-Ziv algorithm, 1992, IEEE Trans. Inf. Theory.

[6] Peter Elias, et al., Universal codeword sets and representations of the integers, 1975, IEEE Trans. Inf. Theory.

[7] Zhen Zhang, et al., An on-line universal lossy data compression algorithm via continuous codebook refinement - Part I: Basic results, 1996, IEEE Trans. Inf. Theory.

[8] C. E. Shannon, A mathematical theory of communication, 1948, Bell Syst. Tech. J.

[9] Andrew R. Barron, et al., Information-theoretic asymptotics of Bayes methods, 1990, IEEE Trans. Inf. Theory.

[10] Frederick Jelinek, et al., On variable-length-to-block coding, 1972, IEEE Trans. Inf. Theory.

[11] Jorma Rissanen, et al., Complexity of strings in the class of Markov sources, 1986, IEEE Trans. Inf. Theory.

[12] R. Gray, et al., A generalization of Ornstein's d̄ distance with applications to information theory, 1975.

[13] D. Ornstein, et al., Universal almost sure data compression, 1990.

[14] Mark H. A. Davis, et al., Strong consistency of the PLS criterion for order determination of autoregressive processes, 1989.

[15] Lee D. Davisson, et al., Minimax noiseless universal coding for Markov sources, 1983, IEEE Trans. Inf. Theory.

[16] Neri Merhav, A comment on 'A rate of convergence result for a universal D-semifaithful code', 1995, IEEE Trans. Inf. Theory.

[17] R. Gray, Entropy and Information Theory, 1990, Springer.

[18] Jorma Rissanen, et al., Universal coding, information, prediction, and estimation, 1984, IEEE Trans. Inf. Theory.

[19] John C. Kieffer, et al., A unified approach to weak universal source coding, 1978, IEEE Trans. Inf. Theory.

[20] Robert M. Gray, et al., Source coding theorems without the ergodic assumption, 1974, IEEE Trans. Inf. Theory.

[21] Zhen Zhang, et al., An on-line universal lossy data compression algorithm via continuous codebook refinement - Part II: Optimality for phi-mixing source models, 1996, IEEE Trans. Inf. Theory.

[22] Tamás Linder, et al., Rates of convergence in the source coding theorem, in empirical quantizer design, and in universal lossy source coding, 1994, IEEE Trans. Inf. Theory.

[23] P. Gács, et al., Bounds on conditional probabilities with applications in multi-user communication, 1976.

[24] Andrew R. Barron, et al., Minimum complexity density estimation, 1991, IEEE Trans. Inf. Theory.

[25] Raphail E. Krichevsky, et al., The performance of universal encoding, 1981, IEEE Trans. Inf. Theory.

[26] Lee D. Davisson, et al., Universal noiseless coding, 1973, IEEE Trans. Inf. Theory.

[27] David L. Neuhoff, et al., Fixed rate universal block source coding with a fidelity criterion, 1975, IEEE Trans. Inf. Theory.

[28] En-Hui Yang, et al., Simple universal lossy data compression schemes derived from the Lempel-Ziv algorithm, 1996, IEEE Trans. Inf. Theory.

[29] E. J. Hannan, et al., Regression, autoregression models, 1986.

[30] Michael B. Pursley, et al., Variable-rate universal block source coding subject to a fidelity constraint, 1978, IEEE Trans. Inf. Theory.

[31] R. J. Pilc, The transmission distortion of a source as a function of the encoding block length, 1968.

[32] Bin Yu, et al., A rate of convergence result for a universal D-semifaithful code, 1993, IEEE Trans. Inf. Theory.