Rate Distortion Function for a Class of Relative Entropy Sources

This paper deals with rate distortion, or source coding with a fidelity criterion, on measure spaces, for a class of source distributions. The class of source distributions is described by a relative entropy constraint set between the true and a nominal distribution. The rate distortion problem for the class is formulated and solved using minimax strategies, which result in robust source coding with a fidelity criterion. It is shown that the minimax and maxmin strategies can be computed explicitly, and that they are generalizations of the classical solution. Finally, for discrete memoryless uncertain sources, the rate distortion theorem for the class is stated without derivation, while the converse is derived.

I. INTRODUCTION

This paper is concerned with lossy data compression for a class of sources defined on the space of probability distributions on general alphabet spaces. In the classical rate distortion formulation with a fidelity decoding criterion, Shannon showed that minimizing the mutual information between finite-alphabet source and reproduction sequences over the reproduction kernel, subject to a fidelity criterion, has an operational meaning: it gives the minimum amount of information needed to represent a source symbol by a reproduction symbol within a pre-specified fidelity or distortion level.
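To fix notation, the classical rate distortion function and a schematic form of the uncertainty class described above can be written as follows. The symbols P_0 for the nominal distribution and R_c for the relative entropy radius are assumed here for illustration only and need not match the paper's notation; the minimax expression is a schematic statement of the robust problem, not the paper's exact formulation.

R(D) \;=\; \inf_{Q_{\hat{X}|X}\,:\; \mathbf{E}_{P\otimes Q}[\,d(X,\hat{X})\,]\le D} \; I(X;\hat{X})

\mathcal{A}(P_0,R_c) \;=\; \{\, P \,:\, D(P\,\|\,P_0)\le R_c \,\}

R^{+}(D) \;=\; \inf_{Q_{\hat{X}|X}} \; \sup_{P\in\mathcal{A}(P_0,R_c)} \; I(P;Q_{\hat{X}|X}) \quad \text{subject to } \mathbf{E}_{P\otimes Q}[\,d(X,\hat{X})\,]\le D .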
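As a purely illustrative sketch, and not the construction of this paper, the classical rate distortion function of a discrete memoryless source with a fixed (nominal) distribution can be computed numerically by the standard Blahut-Arimoto iteration. The function name, the slope parameter s, and the binary example below are chosen for this sketch; the iteration traces out one point of the R(D) curve per value of s.

import numpy as np

def blahut_arimoto_rd(p, d, s, n_iter=500, tol=1e-10):
    """Blahut-Arimoto iteration for the classical rate distortion
    function of a discrete memoryless source (illustrative sketch).

    p : source distribution over the source alphabet, shape (nx,)
    d : distortion matrix d[x, xhat], shape (nx, ny)
    s : positive slope parameter; larger s gives smaller distortion
    Returns one point (D, R) on the R(D) curve, with R in nats.
    """
    nx, ny = d.shape
    q = np.full(ny, 1.0 / ny)          # output marginal, start uniform
    A = np.exp(-s * d)                 # precompute exp(-s * d(x, xhat))

    for _ in range(n_iter):
        # Optimal test channel for the current output marginal.
        Q = A * q[None, :]
        Q /= Q.sum(axis=1, keepdims=True)
        # Output marginal induced by the source and the channel.
        q_new = p @ Q
        if np.max(np.abs(q_new - q)) < tol:
            q = q_new
            break
        q = q_new

    # Final channel, its induced marginal, distortion, and mutual information.
    Q = A * q[None, :]
    Q /= Q.sum(axis=1, keepdims=True)
    q_out = p @ Q
    D = np.sum(p[:, None] * Q * d)
    R = np.sum(p[:, None] * Q * np.log(Q / q_out[None, :]))
    return D, R

# Example: binary source with Hamming distortion; the result should be
# close to the known curve R(D) = H(p) - H(D) for small enough D.
p = np.array([0.4, 0.6])
d = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(blahut_arimoto_rd(p, d, s=3.0))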
