Achieving Low Complexity Neural Decoders via Iterative Pruning

The advancement of deep learning has led to the development of neural decoders for low-latency communications. However, neural decoders can be very complex, which increases both computation and latency. We consider iterative pruning approaches (such as the lottery ticket hypothesis algorithm) to prune weights in neural decoders. Decoders with fewer weights can have lower latency and lower complexity while retaining the accuracy of the original model, making neural decoders more suitable for mobile and other edge devices with limited computational power. We also propose semi-soft decision decoding for neural decoders, which can be used to improve the bit error rate performance of the pruned network.

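To make the iterative pruning idea concrete, below is a minimal sketch of lottery-ticket-style iterative magnitude pruning in PyTorch. It is not the paper's exact procedure: the MLP decoder architecture, the (15, 11) code dimensions (chosen to suggest a Hamming(15, 11) code), the 20% per-round prune fraction, the number of rounds, and the random stand-in training data are all illustrative assumptions. Each round trains the masked network, prunes the smallest-magnitude surviving weights in each layer, then rewinds the survivors to their initial values, as in the lottery ticket hypothesis algorithm.

```python
# Sketch of lottery-ticket-style iterative magnitude pruning (illustrative only).
import copy
import torch
import torch.nn as nn

def magnitude_masks(model, masks, prune_frac):
    """Prune the smallest-magnitude surviving weights in each weight matrix."""
    new_masks = {}
    for name, param in model.named_parameters():
        if "weight" not in name:  # leave biases unpruned
            continue
        mask = masks.get(name, torch.ones_like(param))
        alive = param[mask.bool()].abs()
        k = int(prune_frac * alive.numel())
        threshold = alive.sort().values[k] if k > 0 else alive.min() - 1
        # Keep only surviving weights whose magnitude exceeds the threshold.
        new_masks[name] = mask * (param.abs() > threshold).float()
    return new_masks

def apply_masks(model, masks):
    """Force pruned weights to zero."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.mul_(masks[name])

def train(model, masks, steps=100):
    """Placeholder loop: replace with training on (received channel values, message bits)."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        x = torch.randn(64, 15)                     # stand-in for noisy codewords
        y = torch.randint(0, 2, (64, 11)).float()   # stand-in for message bits
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
        apply_masks(model, masks)  # keep pruned weights at zero after each update

model = nn.Sequential(nn.Linear(15, 128), nn.ReLU(), nn.Linear(128, 11))
init_state = copy.deepcopy(model.state_dict())  # saved initialization for rewinding
masks = {}
for _ in range(5):  # each round prunes 20% of the surviving weights
    train(model, masks)
    masks = magnitude_masks(model, masks, prune_frac=0.2)
    model.load_state_dict(init_state)  # rewind survivors to their initial values
    apply_masks(model, masks)
train(model, masks)  # final training of the pruned "winning ticket"
```

The rewinding step is what distinguishes the lottery ticket procedure from plain fine-tuning after pruning: the surviving weights restart from their original initialization rather than their trained values, and only the mask carries information between rounds.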