On quantization of log-likelihood ratios for maximum mutual information

We consider mutual-information-optimal quantization of log-likelihood ratios (LLRs). An efficient algorithm is presented for designing LLR quantizers based either on the unconditional LLR distribution or on a set of LLR samples. In the latter case, a small number of samples suffices and no training data are required, so the algorithm can be used to design LLR quantizers on the fly during data transmission. The proposed algorithm is reminiscent of the well-known Lloyd-Max algorithm and is not restricted to any particular LLR distribution.
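
To make the sample-based design objective concrete, the following minimal Python sketch estimates the mutual information I(X; Z) of a scalar LLR quantizer directly from unlabeled LLR samples, using the consistency relation P(X=+1 | l) = 1/(1+e^{-l}) so that no training labels are needed, and then tunes the thresholds by a simple coordinate search over sample quantiles. This is an illustration under these assumptions, not the paper's algorithm; the function names, the grid-based search, and the BPSK/AWGN test setup are choices made for this sketch.

```python
import numpy as np


def mutual_information(thresholds, llrs):
    """Estimate I(X; Z) in bits for a scalar quantizer with the given thresholds,
    using only unlabeled LLR samples (posteriors are implied by the LLRs)."""
    w_plus = 1.0 / (1.0 + np.exp(-llrs))        # P(X=+1 | l) for each sample
    w_minus = 1.0 - w_plus                      # P(X=-1 | l) for each sample
    cells = np.digitize(llrs, thresholds)       # quantizer cell index per sample
    info = 0.0
    for k in range(len(thresholds) + 1):
        in_cell = cells == k
        p_plus = w_plus[in_cell].sum() / w_plus.sum()     # P(Z=k | X=+1)
        p_minus = w_minus[in_cell].sum() / w_minus.sum()  # P(Z=k | X=-1)
        p_z = 0.5 * (p_plus + p_minus)                    # P(Z=k), equiprobable X
        for p in (p_plus, p_minus):
            if p > 0:
                info += 0.5 * p * np.log2(p / p_z)
    return info


def design_quantizer(llrs, num_levels=4, num_iter=10, grid_size=100):
    """Greedy coordinate search over thresholds on a grid of sample quantiles
    (a stand-in for a Lloyd-Max-style alternating optimization)."""
    grid = np.quantile(llrs, np.linspace(0.01, 0.99, grid_size))
    # start from equiprobable (quantile-based) thresholds
    thresholds = np.quantile(llrs, np.linspace(0, 1, num_levels + 1)[1:-1])
    best = mutual_information(thresholds, llrs)
    for _ in range(num_iter):
        improved = False
        for i in range(len(thresholds)):
            lo = thresholds[i - 1] if i > 0 else -np.inf
            hi = thresholds[i + 1] if i < len(thresholds) - 1 else np.inf
            for cand in grid[(grid > lo) & (grid < hi)]:
                trial = thresholds.copy()
                trial[i] = cand
                val = mutual_information(trial, llrs)
                if val > best:
                    best, thresholds, improved = val, trial, True
        if not improved:
            break
    return thresholds, best


if __name__ == "__main__":
    # Hypothetical test case: BPSK over AWGN, LLR = 2*y / sigma^2 with y = x + n
    rng = np.random.default_rng(0)
    sigma = 1.0
    x = rng.choice([-1.0, 1.0], size=10000)
    y = x + sigma * rng.standard_normal(x.size)
    llrs = 2.0 * y / sigma**2
    t, mi = design_quantizer(llrs, num_levels=4)
    print("thresholds:", np.round(t, 3), " I(X;Z) ~", round(mi, 4), "bits")
```

The brute-force coordinate search above is only meant to expose the objective function; the appeal of a Lloyd-Max-like alternation, as described in the abstract, is that it replaces this search with closed-form update steps while remaining agnostic to the LLR distribution.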
