Deep Learning-Based Quantization of L-Values for Gray-Coded Modulation

In this work, we introduce a deep learning-based quantization scheme for storing log-likelihood ratios (L-values) in fading channels affected by interference. We derive the number of sufficient statistics required to exactly reconstruct the set of L-values corresponding to a channel use as 3 + 2×N_I, where N_I is the number of interferers. We analyze the dependency between the average magnitudes of different L-values and show that they follow a consistent ordering, regardless of the channel coefficient or the interference distribution. Based on this, we design a deep autoencoder that jointly compresses and separately reconstructs each L-value, allowing the use of a weighted loss function that promotes more accurate reconstruction of low-magnitude inputs. Our method is competitive with state-of-the-art maximum mutual information quantization schemes, reducing the required memory footprint by up to a factor of two while incurring a performance loss below 0.1 dB with fewer than two effective bits per L-value, and below 0.04 dB with 2.25 effective bits. We demonstrate that the same network can be reused, without further training, across various channel models and error-correcting codes while preserving the same performance benefits.
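The weighted loss described above can be sketched as follows. This is a minimal illustration, not the paper's exact loss: the specific weighting rule is an assumption here, chosen inversely proportional to L-value magnitude so that low-magnitude (least reliable, and most quantization-sensitive) L-values dominate the reconstruction objective.

```python
import numpy as np

def weighted_mse(l_true, l_hat, eps=1.0):
    """Weighted MSE emphasizing low-magnitude L-values.

    The weighting w = 1 / (|L| + eps) is a hypothetical choice for
    illustration: the same absolute reconstruction error costs more
    when the true L-value has small magnitude.
    """
    l_true = np.asarray(l_true, dtype=float)
    l_hat = np.asarray(l_hat, dtype=float)
    w = 1.0 / (np.abs(l_true) + eps)
    return float(np.mean(w * (l_true - l_hat) ** 2))

# The same error of 0.5 is penalized more at |L| = 0.1 than at |L| = 9.0.
loss_small = weighted_mse([0.1], [0.6])
loss_large = weighted_mse([9.0], [9.5])
```

With this weighting, the autoencoder is pushed to spend its limited code budget on the near-zero L-values whose signs are easiest to flip under coarse quantization.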
