The context-tree weighting method: basic properties
We describe a sequential universal data compression procedure for binary tree sources that performs the “double mixture.” Using a context tree, this method weights in an efficient recursive way the coding distributions corresponding to all bounded memory tree sources, and achieves a desirable coding distribution for tree sources with an unknown model and unknown parameters. Computational and storage complexity of the proposed procedure are both linear in the source sequence length. We derive a natural upper bound on the cumulative redundancy of our method for individual sequences. The three terms in this bound can be identified as coding, parameter, and model redundancy. The bound holds for all source sequence lengths, not only for asymptotically large lengths. The analysis that leads to this bound is based on standard techniques and turns out to be extremely simple. Our upper bound on the redundancy shows that the proposed context-tree weighting procedure is optimal in the sense that it achieves the Rissanen (1984) lower bound.
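The recursive weighting described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: each context node keeps zero/one counts, the Krichevsky-Trofimov (KT) estimator gives a local probability estimate, and the weighted probability mixes that estimate with the product of the children's weighted probabilities. The function names `kt_prob` and `ctw_prob` are hypothetical, and this brute-force version recomputes node probabilities rather than running in linear time.

```python
def kt_prob(a, b):
    # KT (add-1/2) estimate of a binary sequence with a zeros and b ones;
    # exchangeable, so only the counts matter.
    p = 1.0
    a0 = b0 = 0
    for _ in range(a):
        p *= (a0 + 0.5) / (a0 + b0 + 1)
        a0 += 1
    for _ in range(b):
        p *= (b0 + 0.5) / (a0 + b0 + 1)
        b0 += 1
    return p

def ctw_prob(seq, context, depth):
    # Weighted block probability P_w of `seq` under a context tree of the
    # given depth; `context` supplies at least `depth` past bits.
    assert len(context) >= depth
    counts = {}
    hist = list(context)
    for x in seq:
        # update counts at every context node s = last d bits of the history
        for d in range(depth + 1):
            s = tuple(hist[len(hist) - d:])
            a, b = counts.get(s, (0, 0))
            counts[s] = (a + 1, b) if x == 0 else (a, b + 1)
        hist.append(x)

    def pw(s):
        a, b = counts.get(s, (0, 0))
        pe = kt_prob(a, b)
        if len(s) == depth:          # leaf: weighted prob = KT estimate
            return pe
        # internal node: mix the local estimate with the children's product
        return 0.5 * pe + 0.5 * pw((0,) + s) * pw((1,) + s)

    return pw(())
```

Because the weighted measure is a mixture of proper coding distributions, the block probabilities of all sequences of a given length sum to one, which makes the sketch easy to sanity-check.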