The MMI Decoder is Asymptotically Optimal for the Typical Random Code and for the Expurgated Code

We provide two results concerning the optimality of the maximum mutual information (MMI) decoder. First, we prove that the error exponents of the typical random codes under the optimal maximum likelihood (ML) decoder and under the MMI decoder are equal. As a corollary, we also show that the error exponents of expurgated codes under the ML and MMI decoders are equal. These results strengthen the well-known result of Csiszár and Körner, according to which these two decoders achieve equal random-coding error exponents, since the error exponents of the typical random code and of the expurgated code are strictly higher than the random-coding error exponent, at least at low coding rates. While the universal optimality of the MMI decoder, in the sense of the random-coding error exponent, is easily established by commuting the expectation over the channel noise with the expectation over the ensemble, no such commutation is available for the typical and expurgated exponents. The proof of the universal optimality of the MMI decoder must therefore proceed along entirely different lines, and it turns out to be highly non-trivial.
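For context, here is a minimal sketch of the objects involved, in standard method-of-types notation (the notation is ours, not necessarily the paper's). For a discrete memoryless channel, given the received sequence $y$, the MMI decoder selects the codeword whose joint empirical distribution with $y$ has maximal empirical mutual information:

$$\hat{m} = \arg\max_{1 \le m \le M} \hat{I}(x_m; y),$$

where $\hat{I}(x_m; y)$ denotes the mutual information induced by the joint type of $(x_m, y)$. The two exponents being compared are the random-coding exponent and the typical-random-code exponent,

$$E_{\mathrm{r}}(R) = \lim_{n \to \infty} -\frac{1}{n} \log \mathbb{E}\left[P_e(\mathcal{C}_n)\right], \qquad E_{\mathrm{trc}}(R) = \lim_{n \to \infty} -\frac{1}{n} \mathbb{E}\left[\log P_e(\mathcal{C}_n)\right],$$

where $P_e(\mathcal{C}_n)$ is the error probability of the randomly drawn code $\mathcal{C}_n$. By Jensen's inequality, $E_{\mathrm{trc}}(R) \ge E_{\mathrm{r}}(R)$, and at low rates the inequality is strict; this is why equality of the ML and MMI exponents for typical (and expurgated) codes is a strictly stronger statement than the classical random-coding result.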

[1] Neri Merhav et al., "Error Exponents of Typical Random Codes," 2018 IEEE International Symposium on Information Theory (ISIT), 2018.

[2] Neri Merhav et al., "The generalized stochastic likelihood decoder: Random coding and expurgated bounds," 2016 IEEE International Symposium on Information Theory (ISIT), 2016.

[3] Neri Merhav et al., "Large Deviations Behavior of the Logarithmic Error Probability of Random Codes," IEEE Transactions on Information Theory, 2019.

[4] Neri Merhav et al., "A Lagrange–Dual Lower Bound to the Error Exponent of the Typical Random Code," IEEE Transactions on Information Theory, 2020.

[5] Anelia Somekh-Baruch et al., "Generalized Random Gilbert-Varshamov Codes," IEEE Transactions on Information Theory, 2018.

[6] Imre Csiszár et al., "Graph decomposition: A new key to coding theorems," IEEE Transactions on Information Theory, 1981.

[7] Neri Merhav et al., "Trade-offs Between Error Exponents and Excess-Rate Exponents of Typical Slepian-Wolf Codes," arXiv preprint, 2020.

[8] Achilleas Anastasopoulos et al., "Error Exponent for Multiple Access Channels: Upper Bounds," IEEE Transactions on Information Theory, 2015.

[9] Robert G. Gallager, "A simple derivation of the coding theorem and some applications," IEEE Transactions on Information Theory, 1965.

[10] Neri Merhav et al., "Error Exponents of Typical Random Trellis Codes," IEEE Transactions on Information Theory, 2019.

[11] Imre Csiszár et al., Information Theory: Coding Theorems for Discrete Memoryless Systems, 2nd ed., 2011.

[12] Alexander Barg et al., "Random codes: Minimum distances and error exponents," IEEE Transactions on Information Theory, 2002.

[13] Sergio Verdú et al., "On α-decodability and α-likelihood decoder," 2017 55th Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2017.

[14] Neri Merhav et al., "Error Exponents of Typical Random Codes for the Colored Gaussian Channel," 2019 IEEE International Symposium on Information Theory (ISIT), 2019.