Autoregressive Belief Propagation for Decoding Block Codes

We revisit recent methods that apply graph neural networks to decoding error-correcting codes, and compute the messages in an autoregressive manner. The outgoing messages of the variable nodes are conditioned not only on the incoming messages, but also on an estimate of the SNR, on the inferred codeword, and on two downstream computations: (i) an extended vector of parity-check outcomes, and (ii) the mismatch between the inferred codeword and the re-encoding of its information bits. Unlike most learned methods in the field, our method violates the symmetry conditions that enable the other methods to train exclusively on the zero codeword. Although we therefore lack the luxury of training on a single word, and can only train on a small fraction of the relevant sample space, we demonstrate effective training. The new method obtains a bit error rate that outperforms the latest methods by a sizable margin.
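The two downstream computations that condition the variable-node messages can be illustrated for a systematic linear block code. The sketch below is a minimal, hypothetical illustration (the function and variable names are not from the paper): given a hard-decision codeword estimate, it computes (i) the vector of parity-check outcomes, i.e. the syndrome over GF(2), and (ii) the mismatch between the estimate and the re-encoding of its information bits.

```python
import numpy as np

def decoder_side_features(x_hat, H, G, k):
    """Illustrative sketch of the two auxiliary decoder inputs.

    x_hat : hard-decision codeword estimate, 0/1 vector of length n
    H     : (n - k) x n parity-check matrix
    G     : k x n systematic generator matrix (message in first k positions)
    """
    # (i) parity-check outcomes: the syndrome H * x_hat over GF(2);
    # an all-zero syndrome means every check is satisfied
    syndrome = H.dot(x_hat) % 2
    # (ii) re-encode the information bits of the estimate and flag
    # every position where the re-encoding disagrees with x_hat
    reencoded = x_hat[:k].dot(G) % 2
    mismatch = (reencoded != x_hat).astype(int)
    return syndrome, mismatch
```

For a valid codeword both features are all-zero; any bit flip makes the syndrome (and, for a systematic code, the mismatch vector) nonzero, giving the network an explicit signal about which constraints the current estimate violates.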
