Error Probability Analysis of Binary Asymmetric Channels

In his landmark 1948 paper, Shannon defined channel capacity as the ultimate rate at which information can be transmitted over a communication channel with an error probability that vanishes as the blocklength grows without bound. While this result is of tremendous theoretical importance, practical systems look quite different: no communication system can tolerate the unbounded delay caused by an extremely large blocklength, nor the computational complexity of decoding such huge codewords. On the other hand, the error probability need not be exactly zero either; a small but finite value suffices. The question therefore arises of what can be achieved in a practical scheme. In particular, what is the maximal rate at which information can be transmitted over a communication channel with a fixed maximum blocklength (i.e., a fixed maximum delay), if a certain maximal probability of error is allowed? In this project, we have started to study these questions.

Block codes with very short blocklength over the most general binary channel, the binary asymmetric channel (BAC), are investigated. It is shown that for only two possible messages, flip-flop codes are optimal, though, depending on the blocklength and the channel parameters, not necessarily the linear flip-flop code. It is further shown that the optimal decoding rule is a threshold rule, and some fundamental dependencies of the best code on the channel are given. Block codes with a very small number of codewords are also investigated for two special binary memoryless channels, the binary symmetric channel (BSC) and the Z-channel (ZC). The optimal code structure (in the sense of minimum average error probability under maximum-likelihood decoding) is derived for the cases of two, three, and four codewords and an arbitrary blocklength.
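As an illustrative sketch of the optimality criterion used above (not the paper's own derivation), the minimum average error probability of any small code can be evaluated by brute force: enumerate all received sequences and apply maximum-likelihood decoding. The BAC below has crossover probabilities eps0 (a transmitted 0 received as 1) and eps1 (a transmitted 1 received as 0); the BSC is the special case eps0 = eps1, and the ZC the case eps0 = 0. Function names and the tuple-based codebook representation are choices made for this sketch.

```python
from itertools import product

def bac_likelihood(y, x, eps0, eps1):
    # P(y | x) on a BAC: a 0 flips to 1 with prob. eps0, a 1 flips to 0
    # with prob. eps1 (memoryless, so the likelihood factorizes).
    p = 1.0
    for yi, xi in zip(y, x):
        if xi == 0:
            p *= eps0 if yi == 1 else 1 - eps0
        else:
            p *= eps1 if yi == 0 else 1 - eps1
    return p

def ml_error_probability(codebook, eps0, eps1):
    # Average error probability for equiprobable messages under ML
    # decoding, by exhaustive enumeration of all 2^n received sequences.
    # (Ties contribute the same likelihood either way, so tie-breaking
    # does not affect the result.)
    n = len(codebook[0])
    p_correct = 0.0
    for y in product((0, 1), repeat=n):
        likelihoods = [bac_likelihood(y, x, eps0, eps1) for x in codebook]
        p_correct += max(likelihoods) / len(codebook)
    return 1.0 - p_correct
```

For example, the two-codeword repetition code of blocklength 3 on a BSC with crossover probability 0.1 gives the familiar majority-vote error probability 3(0.1)^2(0.9) + (0.1)^3 = 0.028.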
It is shown that for two possible messages, the so-called flip codes of type t are optimal on a BSC for any t, while on a ZC only the flip code of type 0 is optimal. For codes with three or four messages, the so-called weak flip codes of a given type are shown to be optimal, where the type depends on the blocklength. For all cases, an algorithm is presented that constructs an optimal code of blocklength n recursively from an optimal code of blocklength n − 1. For two and four messages, the optimal code is shown to be linear. For the ZC, a recursive optimal code design is conjectured for the case of five possible messages. The derivation of these optimal codes relies heavily on a new approach that constructs and analyzes the code matrix not row-wise (by codewords) but column-wise. Moreover, these results prove that the minimum Hamming distance can be the wrong design criterion for optimal codes, even for very symmetric channels like the BSC.
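To make the flip-code structure concrete: a flip code for two messages pairs one codeword with its bitwise complement (its "flip"), and the type t counts the ones in the first codeword. The sketch below is an illustrative assumption about the convention (ones placed in the last t coordinates), which the abstract does not fix; it also exposes the column-wise view of the code matrix that the analysis is based on.

```python
def flip_code(n, t):
    # Flip code of type t for two messages and blocklength n:
    # one codeword with t ones, paired with its bitwise complement.
    # Placing the ones in the last t positions is an illustrative
    # convention, not fixed by the abstract; type 0 gives the linear
    # code {all-zeros, all-ones}.
    x1 = tuple([0] * (n - t) + [1] * t)
    x2 = tuple(1 - b for b in x1)  # the flipped codeword
    return [x1, x2]

def columns(codebook):
    # Column-wise view of the code matrix: for any flip code, every
    # column is either (0, 1) or (1, 0).
    return list(zip(*codebook))
```

For instance, `flip_code(5, 2)` returns the pair (0,0,0,1,1) and (1,1,1,0,0), whose five columns are all drawn from {(0,1), (1,0)}.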
