The Asymptotic Uniformity of the Output of Convolutional Codes Under Markov Inputs

In this letter, we prove a published conjecture on the asymptotic uniformity of the output of a convolutional encoder driven by biased inputs. These results are relevant to recent research on joint source-channel coding, as well as on source coding with turbo codes whose constituent encoders are convolutional codes. In particular, it is well known that in many such settings a good code should induce a nearly uniform distribution on blocks of consecutive encoded symbols. The results presented here provide insight into the choice of encoder in these scenarios.
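
To make the claim concrete, the following minimal simulation sketch (not part of the letter) illustrates the phenomenon on the simplest recursive, scrambler-like rate-1 encoder, y[t] = x[t] XOR y[t-1] XOR y[t-3], driven by a heavily biased i.i.d. source (a special case of a Markov input). The feedback taps, input bias p, block length L, and sample size are illustrative assumptions, not parameters taken from the letter.

```python
import numpy as np

def recursive_encode(x, taps=(1, 3)):
    """Rate-1 recursive encoder over GF(2): y[t] = x[t] + sum over j in taps of y[t-j]."""
    m = max(taps)
    y = np.zeros(len(x) + m, dtype=np.uint8)   # y[:m] is the all-zero initial state
    for t in range(len(x)):
        fb = 0
        for j in taps:
            fb ^= y[m + t - j]
        y[m + t] = x[t] ^ fb
    return y[m:]

def block_entropy(bits, L):
    """Empirical entropy (in bits) of non-overlapping length-L blocks."""
    blocks = bits[: len(bits) // L * L].reshape(-1, L)
    idx = blocks @ (1 << np.arange(L)[::-1])   # map each block to an integer in [0, 2^L)
    freq = np.bincount(idx, minlength=1 << L) / len(idx)
    nz = freq[freq > 0]
    return -np.sum(nz * np.log2(nz))

rng = np.random.default_rng(0)
p = 0.1                                        # heavily biased i.i.d. input source (assumed)
x = (rng.random(600_000) < p).astype(np.uint8)
y = recursive_encode(x)

L = 3                                          # block of consecutive symbols (assumed)
print(f"input  length-{L} block entropy: {block_entropy(x, L):.3f} bits")
print(f"output length-{L} block entropy: {block_entropy(y, L):.3f} bits (uniform = {L})")
```

With these assumptions, the input's length-3 block entropy is roughly 1.4 bits, while the encoded output's is essentially the full 3 bits: the output blocks are nearly uniform despite the strong input bias. The letter's contribution is to establish this kind of uniformity rigorously for convolutional codes under Markov inputs.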
