Second-order coding region for the discrete successive refinement source coding problem

We derive the optimal second-order coding region for the discrete successive refinement source coding problem under the joint excess-distortion criterion. To do so, we define a generalization of the tilted information density and leverage its properties. The achievability part relies on the type covering lemmas of Kanlis and Narayan (1996) and of No, Ingber and Weissman (2015), while the converse proof uses the perturbation approach of Gu and Effros (2009). We also specialize our results to successively refinable sources and provide an alternative converse proof for such sources by generalizing Kostina and Verdú's (2012) one-shot converse bound for point-to-point lossy source coding.
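For context, the following is a brief sketch of the point-to-point quantities that the paper generalizes; the notation (d, λ*, Y*, Q⁻¹) is standard in the work of Kostina and Verdú but is not defined in this abstract. For a source X with distortion measure d and rate-distortion function R(D), the D-tilted information density is

\[
\jmath_X(x, D) \;=\; -\log \mathbb{E}\Bigl[\exp\bigl(\lambda^{*} D - \lambda^{*} d(x, Y^{*})\bigr)\Bigr],
\qquad \lambda^{*} = -\frac{\partial R(D)}{\partial D},
\]

where Y* follows the output distribution achieving R(D) and the expectation does not condition on x. It satisfies \(\mathbb{E}[\jmath_X(X, D)] = R(D)\), and the point-to-point second-order expansion at excess-distortion probability ε reads

\[
R(n, D, \varepsilon) \;=\; R(D) + \sqrt{\frac{V(D)}{n}}\, Q^{-1}(\varepsilon) + O\!\Bigl(\frac{\log n}{n}\Bigr),
\qquad V(D) = \mathrm{Var}\bigl[\jmath_X(X, D)\bigr].
\]

Loosely speaking, the successive refinement region derived in the paper is a bivariate analogue of this expansion, with the scalar dispersion V(D) replaced by the covariance matrix of the generalized tilted information densities and the Gaussian quantile Q⁻¹(ε) replaced by a quantile set of a bivariate Gaussian; the precise statement is given in the paper.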

[1] Tsachy Weissman, et al. Strong Successive Refinability and Rate-Distortion-Complexity Tradeoff, 2015, IEEE Transactions on Information Theory.

[2] Shun Watanabe, et al. Second-Order Region for Gray–Wyner Network, 2015, IEEE Transactions on Information Theory.

[3] Michelle Effros, et al. A strong converse for a collection of network source coding problems, 2009, 2009 IEEE International Symposium on Information Theory.

[4] William Equitz, et al. Successive refinement of information, 1991, IEEE Trans. Inf. Theory.

[5] V. Bentkus. On the dependence of the Berry–Esseen bound on dimension, 2003.

[6] Prakash Narayan, et al. Error exponents for successive refinement by partitioning, 1996, IEEE Trans. Inf. Theory.

[7] Vincent Yan Fu Tan, et al. Nonasymptotic and Second-Order Achievability Bounds for Coding With Side-Information, 2013, IEEE Transactions on Information Theory.

[8] L. Campbell, et al. A Type Covering Lemma and the Excess Distortion Exponent for Coding Memoryless Laplacian Sources, 2006, 23rd Biennial Symposium on Communications.

[9] Vincent Y. F. Tan, et al. On the dispersions of three network information theory problems, 2012, 2012 46th Annual Conference on Information Sciences and Systems (CISS).

[10] Tetsunao Matsuta, et al. Report on an International Conference: 2013 IEEE International Symposium on Information Theory, 2013.

[11] Kenneth Rose, et al. Error exponents in scalable source coding, 2003, IEEE Trans. Inf. Theory.

[12] Victoria Kostina, et al. Lossy data compression: Nonasymptotic fundamental limits, 2013.

[13] Yuval Kochman, et al. The Dispersion of Lossy Source Coding, 2011, 2011 Data Compression Conference.

[14] Bixio Rimoldi, et al. Successive refinement of information: characterization of the achievable rates, 1994, IEEE Trans. Inf. Theory.

[15] Abbas El Gamal, et al. Network Information Theory, 2011, Cambridge University Press.

[16] Shunsuke Ihara. Error Exponent for Coding of Memoryless Gaussian Sources with a Fidelity Criterion, 2000.

[17] Vincent Yan Fu Tan, et al. Asymptotic Estimates in Information Theory with Non-Vanishing Error Probabilities, 2014, Found. Trends Commun. Inf. Theory.

[18] Sergio Verdú, et al. Fixed-Length Lossy Compression in the Finite Blocklength Regime, 2011, IEEE Transactions on Information Theory.

[19] Sergio Verdú, et al. A new converse in rate-distortion theory, 2012, 2012 46th Annual Conference on Information Sciences and Systems (CISS).

[20] Amin Gohari, et al. A technique for deriving one-shot achievability results in network information theory, 2013, 2013 IEEE International Symposium on Information Theory.

[21] Thomas M. Cover, et al. Network Information Theory, 2001.

[22] Vincent Yan Fu Tan, et al. Second-Order Coding Rates for Channels With State, 2014, IEEE Transactions on Information Theory.