Learning to Decode Polar Codes with Quantized LLRs Passing

In this paper, a weighted successive cancellation (WSC) algorithm is proposed to improve the decoding performance of polar codes with quantized log-likelihood ratios (LLRs). The weights used in the WSC are learned automatically by a neural network (NN). A novel NN model and its simplified architecture are built to select the optimal weights of the WSC, and the NN can be trained using only all-zero codewords. In addition, we impose constraints on the weights to guide the learning process. The small number of trainable parameters leads to faster learning without performance loss. Simulation results show that the WSC algorithm generalizes to arbitrary codewords and that the trained weights allow it to outperform the SC algorithm at the same quantization precision. Notably, the WSC with 3-bit quantization achieves near-floating-point performance for short code lengths.
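The abstract describes weighting the SC update functions and passing quantized LLRs between decoding stages. The sketch below illustrates one plausible form of this idea; the exact update rules, quantizer, and weight placement used in the paper are not given here, so the min-sum-style `f`, the `g` function, the uniform quantizer, and all parameter names are assumptions for illustration only.

```python
import numpy as np

def quantize(llr, bits=3, step=0.5):
    # Uniform quantizer (assumed): round LLRs to a grid and clip them
    # to the range representable with the given bit width.
    lo = -(2 ** (bits - 1)) * step
    hi = (2 ** (bits - 1) - 1) * step
    return np.clip(np.round(llr / step) * step, lo, hi)

def f_weighted(a, b, w):
    # Weighted min-sum approximation of the SC f-function:
    # f(a, b) = w * sign(a) * sign(b) * min(|a|, |b|),
    # with the result re-quantized before being passed on.
    return quantize(w * np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b)))

def g_weighted(a, b, u, w):
    # Weighted SC g-function: g(a, b, u) = w * (b + (1 - 2u) * a),
    # where u is the partial-sum bit from earlier decisions.
    return quantize(w * (b + (1 - 2 * u) * a))
```

In this reading, the NN's role is simply to choose the scalar weights `w` (one per stage or per node, subject to the constraints mentioned in the abstract) so that the quantized updates track the floating-point SC decoder as closely as possible.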
