Optimal Training Channel Statistics for Neural-based Decoders

This work investigates the design of end-to-end channel coding based on deep learning, with a focus on neural-network-based channel decoders. We demonstrate the existence of an optimal training statistic for the cross-entropy loss that allows a network to generalize to channel statistics unseen during training while performing close to the optimal decision rule for those channels. Numerical results illustrate an application to polar coding on binary-input memoryless channels.
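As a toy illustration of the idea (not the paper's architecture or code construction), the sketch below trains a one-layer logistic decoder with the cross-entropy loss on a repetition code over a binary symmetric channel at one flip probability, then evaluates it at a flip probability unseen during training. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def bsc_samples(n_bits, code_len, p, rng):
    """Encode random bits with a length-code_len repetition code and pass them through a BSC(p)."""
    b = rng.integers(0, 2, size=n_bits)
    x = np.repeat(b[:, None], code_len, axis=1)   # repetition encoding
    flips = rng.random(x.shape) < p               # i.i.d. bit flips with probability p
    y = np.bitwise_xor(x, flips.astype(int))
    return b, 2.0 * y - 1.0                       # map {0,1} -> {-1,+1} decoder inputs

def train_logistic(y, b, epochs=200, lr=0.1):
    """Fit a single-layer logistic decoder by gradient descent on the cross-entropy loss."""
    w = np.zeros(y.shape[1])
    for _ in range(epochs):
        p_hat = 1.0 / (1.0 + np.exp(-(y @ w)))    # sigmoid output
        grad = y.T @ (p_hat - b) / len(b)         # gradient of mean cross-entropy
        w -= lr * grad
    return w

def error_rate(w, y, b):
    """Bit error rate of the hard decision sign(y @ w)."""
    return np.mean(((y @ w) > 0).astype(int) != b)

# Train at flip probability 0.1, evaluate at an unseen flip probability 0.2.
b_tr, y_tr = bsc_samples(20000, 7, p=0.1, rng=rng)
w = train_logistic(y_tr, b_tr)
b_te, y_te = bsc_samples(20000, 7, p=0.2, rng=rng)
print(error_rate(w, y_te, b_te))
```

For this symmetric channel the trained weights converge toward a uniform profile, i.e. the decoder approximates the majority-vote rule, which is the optimal decision rule for a repetition code on any BSC with flip probability below 1/2; this is why the mismatch between training and test statistics costs little here.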
