Deep Log-Likelihood Ratio Quantization

In this work, a deep learning-based method for lossy compression and quantization of log-likelihood ratios (LLRs) is proposed, with emphasis on a single-input single-output uncorrelated fading communication setting. A deep autoencoder network is trained to compress, quantize, and reconstruct the bit log-likelihood ratios corresponding to a single transmitted symbol. Specifically, the encoder maps to a latent space whose dimension equals the number of sufficient statistics required to recover the inputs (three in this case), while the decoder reconstructs the inputs from a noisy version of the latent representation, with the added noise modeling quantization effects in a differentiable way. Simulation results show that, when the method is applied to a standard rate-1/2 low-density parity-check (LDPC) code, storing an entire codeword at finite precision requires nearly three times less memory than straightforward scalar quantization of the log-likelihood ratios, at a performance loss below 0.15 dB, and that the method is competitive with state-of-the-art approaches.
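For concreteness, the sketch below shows one way such a quantization autoencoder could be set up in Keras. Only the three-dimensional latent comes from the abstract; everything else (the number of bit LLRs per symbol, layer widths, activations, the Gaussian noise model standing in for quantization error, and the mean-squared-error loss) is an illustrative assumption, not the paper's exact architecture.

```python
from tensorflow.keras import layers, Model

# Illustrative assumptions (not taken from the paper): 6 bit LLRs per
# symbol (e.g., 64-QAM), hidden width 64, and Gaussian noise standing
# in for quantization error on the latent variables.
M_BITS = 6        # assumed number of bit LLRs per transmitted symbol
LATENT_DIM = 3    # latent dimension stated in the abstract
NOISE_STD = 0.1   # assumed noise level modeling quantization effects

# Encoder: bit LLRs -> 3-dimensional latent representation.
llr_in = layers.Input(shape=(M_BITS,))
h = layers.Dense(64, activation="relu")(llr_in)
h = layers.Dense(64, activation="relu")(h)
latent = layers.Dense(LATENT_DIM, activation="tanh")(h)

# Additive noise on the latent keeps the pipeline differentiable
# during training; Keras' GaussianNoise layer is active only in
# training mode and is an identity map at inference.
noisy_latent = layers.GaussianNoise(NOISE_STD)(latent)

# Decoder: reconstruct the input LLRs from the noisy latent.
h = layers.Dense(64, activation="relu")(noisy_latent)
h = layers.Dense(64, activation="relu")(h)
llr_out = layers.Dense(M_BITS)(h)

autoencoder = Model(llr_in, llr_out)
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(llr_train, llr_train, epochs=50, batch_size=256)
```

At inference time the noise layer is inactive, and an actual finite-precision quantizer would be applied to the encoder output before the decoder reconstructs the LLRs; training with additive noise in place of the non-differentiable quantizer is the standard trick that makes end-to-end backpropagation possible.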
