Tradeoffs between the excess-code-length exponent and the excess-distortion exponent in lossy source coding

We consider lossy compression of a discrete memoryless source (DMS) with respect to a single-letter distortion measure, and study the best attainable tradeoff between the exponential decay rates of the probabilities that the codeword length and the cumulative distortion exceed their respective thresholds, in two main settings. In the first, the source is corrupted by a discrete memoryless channel (DMC) before reaching the coder. In the second, universal setting, the (noise-free) source is an unknown member $P_\theta$ of a given family $\{P_\theta,\ \theta \in \Theta\}$; here, inspired by an approach that previously proved fruitful in the context of composite hypothesis testing, we allow the constraint on the excess-code-length exponent to be $\theta$-dependent. Corollaries are derived for several special cases of interest, including Marton's (1974) classical source coding exponent and its generalization to the case where the rate constraint is relaxed from an almost-sure constraint to a constraint on the excess-code-length exponent.
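To make the tradeoff concrete, the two competing exponents can be written as follows. This is a sketch under standard definitions; the symbols $E_c$, $E_d$, and $F$ below are our notation for illustration, not necessarily the paper's. For source blocks $X^n$, a code with length function $\ell(\cdot)$ and reproduction $\hat{X}^n$, a rate threshold $R$, and a distortion threshold $D$,

\[
E_c = \lim_{n\to\infty} -\frac{1}{n}\log\Pr\{\ell(X^n) > nR\},
\qquad
E_d = \lim_{n\to\infty} -\frac{1}{n}\log\Pr\{d(X^n,\hat{X}^n) > nD\},
\]

assuming the limits exist. For fixed-rate coding of a clean DMS $P$ at rate $R$, Marton's classical result identifies the best attainable excess-distortion exponent as

\[
F(R,D,P) = \inf_{Q:\, R(D,Q) > R} D(Q\,\|\,P),
\]

where $R(D,Q)$ is the rate-distortion function of a DMS with letter distribution $Q$ and $D(Q\|P)$ is the relative entropy: excess distortion is dominated by the most likely empirical distribution $Q$ that the rate budget $R$ cannot accommodate. As a numerical illustration only (not code from the paper; the helper names are ours), the sketch below evaluates this exponent by grid search for a Bernoulli($p$) source under Hamming distortion, using $R(D,Q) = h_b(q) - h_b(D)$ for a Bernoulli($q$) type with $D \le \min(q, 1-q)$, and $0$ otherwise:

import numpy as np

def h(x):
    # Binary entropy in bits, clipped away from {0, 1} to avoid log(0).
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def dkl(q, p):
    # Binary divergence D(Bern(q) || Bern(p)) in bits.
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return q * np.log2(q / p) + (1 - q) * np.log2((1 - q) / (1 - p))

def marton_exponent(p, R, D, grid=100001):
    # Grid search over Bernoulli types q: minimize D(q || p) subject to
    # R(D, q) > R, i.e., over the types the rate budget cannot accommodate.
    qs = np.linspace(0.0, 1.0, grid)
    rd = np.where(D < np.minimum(qs, 1 - qs), h(qs) - h(D), 0.0)
    feasible = rd > R
    if not feasible.any():
        return np.inf  # no type forces excess distortion at this rate
    return dkl(qs[feasible], p).min()

# Example: Bernoulli(0.1) source, rate 0.3 bits/symbol, distortion level 0.05.
print(marton_exponent(p=0.1, R=0.3, D=0.05))  # roughly 0.012

The paper's results replace this single exponent with an attainable region of pairs $(E_c, E_d)$, and, in the universal part, with a $\theta$-dependent constraint on $E_c$.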

[1] Elza Erkip et al., "The efficiency of investment information," IEEE Trans. Inf. Theory, 1998.

[2] Frederick Jelinek, "Buffer overflow in variable length coding of fixed rate sources," IEEE Trans. Inf. Theory, 1968.

[3] Neri Merhav et al., "Optimal prefix codes for sources with two-sided geometric distributions," IEEE Trans. Inf. Theory, 2000.

[4] Neri Merhav et al., "Coding of sources with two-sided geometric distributions and unknown parameters," IEEE Trans. Inf. Theory, 2000.

[5] Te Sun Han, "The reliability functions of the general source with fixed-length coding," IEEE Trans. Inf. Theory, 2000.

[6] Toby Berger, Rate Distortion Theory: A Mathematical Basis for Data Compression, Prentice-Hall, 1971.

[7] Ofer Zeitouni et al., "On universal hypotheses testing via large deviations," IEEE Trans. Inf. Theory, 1991.

[8] Richard E. Blahut, "Hypothesis testing and information theory," IEEE Trans. Inf. Theory, 1974.

[9] N. Merhav, "A competitive Neyman-Pearson approach to universal hypothesis testing with applications," in Proc. 2001 IEEE Int. Symp. Information Theory, 2001.

[10] Prakash Narayan et al., "Error exponents for successive refinement by partitioning," IEEE Trans. Inf. Theory, 1996.

[11] Robert G. Gallager, Information Theory and Reliable Communication, Wiley, 1968.

[12] Imre Csiszár, "The Method of Types," IEEE Trans. Inf. Theory, 1998.

[13] A. Wyner, "On the probability of buffer overflow under an arbitrary bounded input-output distribution," 1974.

[14] Richard E. Blahut, "Information bounds of the Fano-Kullback type," IEEE Trans. Inf. Theory, 1976.

[15] Pierre A. Humblet, "Generalization of Huffman coding to minimize the probability of buffer overflow," IEEE Trans. Inf. Theory, 1981.

[16] Katalin Marton, "Error exponent for source coding with a fidelity criterion," IEEE Trans. Inf. Theory, 1974.

[17] Neri Merhav, "Universal coding with minimum probability of codeword length overflow," IEEE Trans. Inf. Theory, 1991.

[18] Robert M. Gray et al., "A unified approach for encoding clean and noisy sources by means of waveform and autoregressive model vector quantization," IEEE Trans. Inf. Theory, 1988.

[19] Richard E. Blahut, Principles and Practice of Information Theory, Addison-Wesley, 1987.