A Lagrange-Dual Lower Bound to the Error Exponent Function of the Typical Random Code

A Lagrange-dual (Gallager-style) lower bound is derived for the error exponent function of the typical random code (TRC) pertaining to the i.i.d. random coding ensemble and mismatched stochastic likelihood decoding. While the original expression, derived from the method of types (the Csiszár-style expression), involves minimization over probability distributions defined on the channel input--output alphabets, the new Lagrange-dual formula involves optimization over only five parameters, independently of the alphabet sizes. For both stochastic and deterministic mismatched decoding (including maximum likelihood decoding as a special case), we provide a rather comprehensive discussion of the insight behind the various ingredients of this formula and describe how its behavior varies as the coding rate ranges across the relevant interval. Among other things, it is demonstrated that this expression simultaneously generalizes both the expurgated error exponent function (at zero rate) and the classical random coding exponent function at high rates, where it also meets the sphere-packing bound.
