Adversarial Neural Networks for Error Correcting Codes

Error correcting codes are a fundamental component of modern communication systems, which demand extremely high throughput, ultra-high reliability, and low latency. Recent approaches that use machine learning (ML) models as decoders offer both improved performance and adaptability to unknown environments, where traditional decoders struggle. We introduce a general framework to further boost the performance and applicability of ML decoders. We propose to pair an ML decoder with a competing discriminator network that tries to distinguish codewords from noisy words and thereby guides the decoder toward recovering the transmitted codewords. Our framework is game-theoretic and motivated by generative adversarial networks (GANs), with the decoder and discriminator competing in a zero-sum game. The decoder learns to simultaneously decode and generate codewords, while the discriminator learns to tell decoded outputs apart from valid codewords. The decoder is thus driven to map noisy received signals to codewords, increasing the probability of successful decoding. We establish a strong connection between our framework and the optimal maximum-likelihood decoder by proving that this decoder defines a Nash equilibrium of our game; training to equilibrium therefore has a good chance of attaining maximum-likelihood performance. Moreover, our framework does not require training labels, which are typically unavailable during communication, and can thus potentially be trained online and adapt to channel dynamics. To demonstrate its performance, we combine the framework with recent neural decoders and show improvements over both the original models and traditional decoding algorithms on a range of codes.
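
The adversarial training loop described above can be illustrated with a short sketch. The code below is a hypothetical PyTorch rendering of the general idea, not the paper's actual architecture: the (7,4) Hamming code, the BPSK-over-AWGN channel, the network sizes, learning rates, batch size, and noise level are all illustrative assumptions. The key point it shows is that the discriminator only needs valid codewords drawn from the known code and the decoder's outputs, so no labeled (transmitted, received) pairs are required.

```python
# Minimal sketch of GAN-style adversarial training for a neural decoder.
# Assumptions (not from the paper): (7,4) Hamming code, BPSK over AWGN,
# small fully connected networks, fixed noise level.
import torch
import torch.nn as nn

n, k, batch = 7, 4, 128  # assumed code length, dimension, batch size
G = torch.tensor([[1, 0, 0, 0, 1, 1, 0],
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]], dtype=torch.float32)

def random_codewords(b):
    # Valid codewords sampled from the known linear code (no data labels needed).
    msgs = torch.randint(0, 2, (b, k)).float()
    return (msgs @ G) % 2

decoder = nn.Sequential(nn.Linear(n, 64), nn.ReLU(),
                        nn.Linear(64, n), nn.Sigmoid())        # soft bit estimates
discriminator = nn.Sequential(nn.Linear(n, 64), nn.ReLU(),
                              nn.Linear(64, 1), nn.Sigmoid())  # codeword vs. not

opt_dec = torch.optim.Adam(decoder.parameters(), lr=1e-3)
opt_dis = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    c = random_codewords(batch)            # reference codewords
    x = 1.0 - 2.0 * c                      # BPSK modulation
    y = x + 0.5 * torch.randn_like(x)      # AWGN channel (assumed noise level)
    c_hat = decoder(y)                     # decoded soft codeword

    # Discriminator step: valid codewords -> 1, decoder outputs -> 0.
    d_loss = bce(discriminator(c), torch.ones(batch, 1)) + \
             bce(discriminator(c_hat.detach()), torch.zeros(batch, 1))
    opt_dis.zero_grad(); d_loss.backward(); opt_dis.step()

    # Decoder step: fool the discriminator so outputs look like valid codewords.
    g_loss = bce(discriminator(decoder(y)), torch.ones(batch, 1))
    opt_dec.zero_grad(); g_loss.backward(); opt_dec.step()
```

In practice the decoder's adversarial loss would be combined with, or used to regularize, an existing neural decoder (e.g., an unfolded belief-propagation decoder); the standalone network above is only meant to make the zero-sum structure of the game concrete.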
