EQ-Net: A Unified Deep Learning Framework for Log-Likelihood Ratio Estimation and Quantization

In this work, we introduce EQ-Net: the first holistic framework that solves both the tasks of log-likelihood ratio (LLR) estimation and quantization using a data-driven method. We motivate our approach with theoretical insights on two practical estimation algorithms at opposite ends of the complexity spectrum and reveal a connection between the complexity of an algorithm and the information bottleneck method: simpler algorithms admit smaller bottlenecks when representing their solutions. This motivates a two-stage algorithm that uses LLR compression as a pretext task for estimation and targets low-latency, high-performance implementations via deep neural networks. We carry out an extensive experimental evaluation and demonstrate that our single architecture achieves state-of-the-art results on both tasks when compared to previous methods, with gains in quantization efficiency as high as 20% and estimation latency reduced by up to 60% when measured on general-purpose and graphics processing units (GPUs). In particular, our approach reduces GPU inference latency by more than a factor of two in several multiple-input multiple-output (MIMO) configurations. Finally, we demonstrate that our scheme is robust to distributional shifts and retains a significant part of its performance when evaluated on 5G channel models, as well as under channel estimation errors.
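To make the two-stage idea concrete, the sketch below shows one plausible way to realize "compression as a pretext task for estimation" in PyTorch: first train an autoencoder on LLR vectors (the quantization stage), then train an estimation head that maps channel observations into the learned latent space and reuses the frozen decoder. All module names, layer sizes, and dimensions (`LLR_DIM`, `LATENT_DIM`, `obs_dim`) are illustrative assumptions, not the actual EQ-Net architecture.

```python
# Hypothetical two-stage LLR compression/estimation sketch (not the paper's code).
import torch
import torch.nn as nn

LLR_DIM = 8       # assumed: LLRs per received symbol (e.g., 256-QAM)
LATENT_DIM = 3    # assumed: size of the compressed (quantized) representation

class LLRAutoencoder(nn.Module):
    """Stage 1 (pretext/quantization): learn a low-dimensional code for LLR vectors."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(LLR_DIM, 64), nn.ReLU(), nn.Linear(64, LATENT_DIM))
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, LLR_DIM))

    def forward(self, llr):
        return self.decoder(self.encoder(llr))

class LLREstimator(nn.Module):
    """Stage 2 (estimation): map observations to the latent space from stage 1
    and reuse the frozen decoder to output LLR estimates."""
    def __init__(self, decoder, obs_dim):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, LATENT_DIM))
        self.decoder = decoder
        for p in self.decoder.parameters():  # keep the pretext decoder fixed
            p.requires_grad = False

    def forward(self, obs):
        return self.decoder(self.head(obs))

# Usage sketch: pretrain the autoencoder on reference LLRs, then train only the
# estimation head on (observation, LLR) pairs with an MSE objective.
ae = LLRAutoencoder()
est = LLREstimator(ae.decoder, obs_dim=4)
obs, llr = torch.randn(32, 4), torch.randn(32, LLR_DIM)
loss = nn.functional.mse_loss(est(obs), llr)
```

Sharing the decoder between the two stages is what ties estimation to the compression pretext task; only the small estimation head needs to run ahead of the decoder at inference time, which is consistent with the low-latency goal stated above.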
