Decoding of Polar Code by Using Deep Feed-Forward Neural Networks

Following its success on image classification problems, deep learning is expanding into new application areas. In this paper, we apply deep learning to the decoding of polar codes. As an initial step, for the memoryless additive Gaussian noise channel we consider a deep feed-forward neural network and investigate its decoding performance with respect to several configuration parameters: the number of hidden layers, the number of nodes per layer, and the activation functions. In general, a more complex network yields better performance. By comparing the performance with that of conventional list decoding, we provide a guideline for choosing these configuration parameters. Although training a deep network may require high computational complexity, it should be noted that a trained network can be deployed and run at low complexity. Considering the trade-off between performance and complexity, we believe that deep learning is a competitive decoding tool.
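To make the approach concrete, the pipeline described in the abstract can be sketched as follows: polar-encode every message, transmit over a BPSK/AWGN channel, and train a feed-forward network to map the noisy received vector back to the information bits. This is a minimal NumPy sketch, not the paper's actual setup: the block length (N=8, K=4), the frozen-bit positions, the single-hidden-layer ReLU/sigmoid architecture, the SNR, and the training loop are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Polar encoding (no bit reversal): G_N = F^{(x)n} with kernel F = [[1,0],[1,1]] ---
def polar_generator(n):
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, F)
    return G

N, K = 8, 4
# Assumed information set for N=8; the paper does not specify its code
# construction, so these indices are illustrative only.
info_pos = np.array([3, 5, 6, 7])
G = polar_generator(3)

def encode(msg):
    u = np.zeros(N, dtype=np.uint8)
    u[info_pos] = msg          # frozen positions stay zero
    return (u @ G) % 2

# Training data: every K-bit message, BPSK-modulated codeword plus Gaussian noise.
msgs = np.array([[(m >> i) & 1 for i in range(K)] for m in range(2 ** K)])
codewords = np.array([encode(m) for m in msgs])

def batch(snr_db=4.0, copies=32):
    x = np.tile(1.0 - 2.0 * codewords, (copies, 1))    # BPSK: 0 -> +1, 1 -> -1
    y = np.tile(msgs, (copies, 1)).astype(float)
    sigma = 10 ** (-snr_db / 20)                       # illustrative noise level
    return x + sigma * rng.standard_normal(x.shape), y

# --- Minimal one-hidden-layer feed-forward decoder, trained with plain SGD ---
H = 64                                                 # hidden-layer width (assumed)
W1 = 0.5 * rng.standard_normal((N, H)); b1 = np.zeros(H)
W2 = 0.5 * rng.standard_normal((H, K)); b2 = np.zeros(K)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = np.maximum(0.0, x @ W1 + b1)                   # ReLU hidden layer
    return sigmoid(h @ W2 + b2), h                     # per-bit posteriors in (0, 1)

lr = 0.05
for epoch in range(300):
    x, y = batch()
    p, h = forward(x)
    d2 = (p - y) / len(x)                              # grad of mean cross-entropy
    gW2, gb2 = h.T @ d2, d2.sum(0)
    d1 = (d2 @ W2.T) * (h > 0)                         # backprop through ReLU
    gW1, gb1 = x.T @ d1, d1.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Hard-decide the network output on noiseless codewords as a sanity check.
p, _ = forward(1.0 - 2.0 * codewords)
acc = np.mean((p > 0.5) == msgs)
print(f"bitwise accuracy on noiseless codewords: {acc:.2f}")
```

Swapping `H`, the number of hidden layers, or the activation function here is exactly the kind of configuration sweep the abstract describes; a deeper or wider network generally improves accuracy at the cost of training complexity, while the trained forward pass remains cheap.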
