An on-line universal lossy data compression algorithm via continuous codebook refinement - Part II: Optimality for φ-mixing source models

For Part I, see ibid., vol. 42, no. 3, pp. 803-821, 1996. Two versions of the gold-washing data compression algorithm are considered: one with a codebook innovation interval and the other with finitely many codebook innovations. The version with codebook innovation interval k is a variant of the gold-washing algorithm in which the codebook is innovated once every k+1 source words during the encoding of the entire source. It is demonstrated that when this version is applied to encode a stationary, φ-mixing source, the expected distortion performance converges to the distortion-rate function of the source as the codebook length goes to infinity. Furthermore, if the source to be encoded is a Markov source or a finite-state source, then the corresponding sample distortion performance converges almost surely to the distortion-rate function. The version with finitely many codebook innovations is a variant of the gold-washing algorithm in which, after finitely many codebook innovations, the codebook is held fixed and reused to encode the forthcoming source sequence block by block. Similar results are established for this version. In addition, the convergence speed of the algorithm is discussed.
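The abstract specifies only the timing of codebook innovations, not the refinement rule itself. The following Python sketch is therefore a schematic illustration under that reading: the names nearest_codeword, innovate_codebook, and encode, and the overwrite-a-random-codeword update, are hypothetical stand-ins, not the paper's actual gold-washing refinement. It shows the two variants described above: innovation once every k+1 source words, and freezing the codebook after finitely many innovations.

```python
import random

def nearest_codeword(word, codebook):
    # Generic nearest-neighbor (vector quantizer) step under squared error.
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(codebook[i], word)))

def innovate_codebook(codebook, recent_words):
    # Hypothetical placeholder for the gold-washing refinement step:
    # here we simply overwrite a random codeword with the latest source word.
    codebook[random.randrange(len(codebook))] = list(recent_words[-1])
    return codebook

def encode(source_words, codebook, k, max_innovations=None):
    # Encode the source word by word; innovate the codebook once every
    # k+1 source words. With max_innovations set, stop innovating after
    # that many updates (the "finitely many codebook innovations" variant)
    # and reuse the fixed codebook for all remaining blocks.
    indices, buffer, innovations = [], [], 0
    for t, word in enumerate(source_words, start=1):
        indices.append(nearest_codeword(word, codebook))
        buffer.append(word)
        if t % (k + 1) == 0 and (max_innovations is None
                                 or innovations < max_innovations):
            codebook = innovate_codebook(codebook, buffer)
            innovations += 1
            buffer.clear()
    return indices

if __name__ == "__main__":
    random.seed(0)
    source = [[random.gauss(0, 1) for _ in range(4)] for _ in range(50)]
    codebook = [[random.gauss(0, 1) for _ in range(4)] for _ in range(8)]
    print(encode(source, [c[:] for c in codebook], k=4))
    print(encode(source, [c[:] for c in codebook], k=4, max_innovations=2))
```

Passing max_innovations mimics the second variant, in which the codebook is held fixed after finitely many innovations and reused block by block on the forthcoming source sequence.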

[1] R. M. Gray et al., "Ergodicity of Markov channels," IEEE Trans. Inf. Theory, 1987.

[2] P. Gács et al., "Bounds on conditional probabilities with applications in multi-user communication," 1976.

[3] P. Billingsley, Convergence of Probability Measures, 1969.

[4] D. Halverson et al., "Discrete-time detection in ε-mixing noise," IEEE Trans. Inf. Theory, 1980.

[5] R. Gray et al., "Asymptotically mean stationary measures," 1980.

[6] J. Kieffer et al., "Markov channels are asymptotically mean stationary," 1981.

[7] M. Iosifescu, Finite Markov Processes and Their Applications, 1981.

[8] N. Merhav et al., "Estimating the number of states of a finite-state source," IEEE Trans. Inf. Theory, 1992.

[9] R. M. Gray, Entropy and Information Theory, New York: Springer, 1990.

[10] J. C. Kieffer, "Sample converses in source coding theory," IEEE Trans. Inf. Theory, 1991.

[11] H. M. H. Shalaby et al., "Error exponents for distributed detection of Markov sources," IEEE Trans. Inf. Theory, 1994.

[12] E.-H. Yang et al., "Simple universal lossy data compression schemes derived from the Lempel-Ziv algorithm," IEEE Trans. Inf. Theory, 1996.

[13] P. C. Shields et al., "Ergodic processes and zero divergence," Proc. 1991 IEEE Int. Symp. Inf. Theory, 1991.

[14] T. Linder et al., "Rates of convergence in the source coding theorem, in empirical quantizer design, and in universal lossy source coding," IEEE Trans. Inf. Theory, 1994.

[15] K. Marton, "A simple proof of the blowing-up lemma," IEEE Trans. Inf. Theory, 1986.

[16] Z. Zhang et al., "An on-line universal lossy data compression algorithm via continuous codebook refinement - Part I: Basic results," IEEE Trans. Inf. Theory, 1996.