Serial vs. Parallel Turbo-Autoencoders and Accelerated Training for Learned Channel Codes

Attracted by their scalability toward practical codeword lengths, we revisit Turbo-autoencoders for end-to-end learning of physical-layer (PHY) communications. To this end, we study the existing Turbo-autoencoder concepts from the literature and compare them with state-of-the-art classical coding schemes. We propose a new component-wise training algorithm based on the idea of Gaussian a priori distributions that reduces the overall training time by almost an order of magnitude. Further, we propose a new serial architecture inspired by classical serially concatenated Turbo code structures and show that a carefully optimized interface between the two component autoencoders is required. To the best of our knowledge, these serial Turbo-autoencoder structures are the best known neural-network-based learned channel codes that can be trained from scratch without any expert knowledge in the domain of channel coding.
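The component-wise training idea mentioned above can be sketched with the classical Gaussian a priori model known from EXIT-chart analysis: instead of running the full iterative loop, each component network is trained in isolation against synthetic a priori log-likelihood ratios (LLRs) drawn as L_A = (σ_A²/2)·x + n with n ~ N(0, σ_A²) for BPSK symbols x ∈ {±1}. The following minimal sketch is illustrative only (the function names and interface are assumptions, not taken from the paper):

```python
import numpy as np

def gaussian_apriori_llrs(bits, sigma_a, rng):
    """Synthesize a priori LLRs matching the consistent-Gaussian model
    L_A = mu_a * x + n, with mu_a = sigma_a^2 / 2 and n ~ N(0, sigma_a^2).

    bits    : array of info bits in {0, 1}
    sigma_a : noise standard deviation controlling the a priori quality
    rng     : numpy random Generator
    """
    x = 1.0 - 2.0 * bits                 # BPSK mapping: 0 -> +1, 1 -> -1
    mu_a = sigma_a ** 2 / 2.0            # consistency condition of the model
    return mu_a * x + sigma_a * rng.standard_normal(bits.shape)

# Usage: feed these LLRs (instead of the other component's extrinsic output)
# to a component decoder network during its isolated training phase.
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=100_000)
llrs = gaussian_apriori_llrs(bits, sigma_a=2.0, rng=rng)
```

Sweeping σ_A over training batches exposes the component network to a priori information of varying quality, mimicking the states it would encounter across iterations of the full Turbo loop, which is what allows each component to be trained without waiting for its counterpart to converge.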
