Communication Algorithms via Deep Learning

Coding theory is a central discipline underpinning the wireline and wireless modems that are the workhorses of the information age. Progress in coding theory has been driven largely by individual human ingenuity, with sporadic breakthroughs over the past century. In this paper we study whether the discovery of decoding algorithms can be automated via deep learning. We study a family of sequential codes parameterized by recurrent neural network (RNN) architectures. We show that creatively designed and trained RNN architectures can decode well-known sequential codes, such as convolutional and turbo codes, with close to optimal performance on the additive white Gaussian noise (AWGN) channel, performance otherwise achieved only by breakthrough algorithms of our times (the Viterbi and BCJR decoders, representing dynamic programming and forward-backward algorithms, respectively). We demonstrate strong generalization: we train at a specific signal-to-noise ratio (SNR) and block length but test across a wide range of both, as well as robustness and adaptivity to deviations from the AWGN setting.
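To ground the classical baseline the abstract refers to, the sketch below implements hard-decision Viterbi decoding (dynamic programming over the code trellis) for a rate-1/2 convolutional code. The generator polynomials (7, 5 in octal) and all function names are illustrative choices for this sketch, not details taken from the paper.

```python
# Minimal sketch: Viterbi decoding of a rate-1/2, constraint-length-3
# convolutional code. Generators (7, 5 octal) are an illustrative choice.

G = [0b111, 0b101]  # generator polynomials

def conv_encode(bits):
    """Encode a bit list; two zero tail bits flush the encoder to state 0."""
    state, out = 0, []
    for b in bits + [0, 0]:
        reg = (b << 2) | state                       # [current, prev, prev-prev]
        out.extend(bin(reg & g).count("1") % 2 for g in G)
        state = reg >> 1
    return out

def viterbi_decode(rx):
    """Maximum-likelihood decoding via dynamic programming on the trellis."""
    INF = float("inf")
    cost = [0.0] + [INF] * 3          # path metric per state; start in state 0
    back = []                         # per step: backpointer (prev_state, bit)
    for t in range(len(rx) // 2):
        r = rx[2 * t : 2 * t + 2]
        new_cost, bp = [INF] * 4, [None] * 4
        for s in range(4):
            if cost[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                exp = [bin(reg & g).count("1") % 2 for g in G]
                branch = sum(e != o for e, o in zip(exp, r))  # Hamming metric
                ns = reg >> 1
                if cost[s] + branch < new_cost[ns]:
                    new_cost[ns], bp[ns] = cost[s] + branch, (s, b)
        cost = new_cost
        back.append(bp)
    # Trace back from state 0 (encoder was flushed), then drop the tail bits.
    s, bits = 0, []
    for bp in reversed(back):
        s, b = bp[s]
        bits.append(b)
    return bits[::-1][:-2]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
code = conv_encode(msg)
assert viterbi_decode(code) == msg        # noiseless channel
noisy = code[:]
noisy[3] ^= 1                             # single bit flip
assert viterbi_decode(noisy) == msg       # corrected by the decoder
```

The paper's RNN decoders are evaluated against exactly this kind of dynamic-programming optimum; a soft-decision version would replace the Hamming branch metric with a Euclidean one over the noisy channel outputs.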