On the dispersions of three network information theory problems

We characterize fundamental limits for the Slepian-Wolf problem, the multiple-access channel, and the asymmetric broadcast channel in the finite blocklength setting. For the Slepian-Wolf problem (distributed lossless source coding), we introduce a fundamental quantity known as the entropy dispersion matrix. We show that if this matrix is positive definite, the optimal rate region under the constraint of a fixed blocklength and a non-zero error probability has a curved boundary, in contrast to the polyhedral region of the asymptotic Slepian-Wolf setting. In addition, the entropy dispersion matrix governs the rate of convergence of the non-asymptotic region to the asymptotic one. We develop a general universal achievability procedure for finite blocklength analyses of other network information theory problems, such as the multiple-access channel and the broadcast channel. We provide inner bounds for these problems using a key result, the vector rate redundancy theorem, which we prove using a multidimensional version of the Berry-Esseen theorem. We show that a so-called information dispersion matrix characterizes these inner bounds.
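To make the entropy dispersion matrix concrete, below is a minimal numerical sketch in Python. It assumes the standard definition used in second-order Slepian-Wolf analyses: the covariance, under the joint pmf p(x1, x2), of the entropy-density vector (-log p(x1|x2), -log p(x2|x1), -log p(x1, x2)), whose mean is the entropy vector (H(X1|X2), H(X2|X1), H(X1, X2)). The function name and interface are ours for illustration, not taken from the paper.

import numpy as np

def entropy_dispersion_matrix(p_joint):
    """Entropy vector and entropy dispersion matrix of a joint pmf (sketch).

    Assumes the entropy dispersion matrix is the covariance of the
    entropy-density vector (-log p(x1|x2), -log p(x2|x1), -log p(x1,x2))
    under p(x1,x2), as in standard second-order Slepian-Wolf analyses.
    Logs are base 2, so all quantities are in bits.
    """
    p = np.asarray(p_joint, dtype=float)
    p1 = p.sum(axis=1, keepdims=True)  # marginal of X1, shape (|X1|, 1)
    p2 = p.sum(axis=0, keepdims=True)  # marginal of X2, shape (1, |X2|)
    with np.errstate(divide='ignore', invalid='ignore'):
        h = np.stack([
            -np.log2(p / p2),  # -log p(x1 | x2)
            -np.log2(p / p1),  # -log p(x2 | x1)
            -np.log2(p),       # -log p(x1, x2)
        ])                     # shape (3, |X1|, |X2|)
    support = p > 0
    w = p[support]             # probabilities of the support points
    hv = h[:, support]         # entropy densities on the support
    mean = hv @ w              # entropy vector (H(X1|X2), H(X2|X1), H(X1,X2))
    centered = hv - mean[:, None]
    V = (centered * w) @ centered.T  # 3x3 entropy dispersion matrix
    return mean, V

# Example: an asymmetric binary joint source. For this source V is
# positive definite, the case in which the abstract's finite-blocklength
# rate region has a curved (non-polyhedral) boundary.
H, V = entropy_dispersion_matrix([[0.5, 0.1], [0.1, 0.3]])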
