On Binary Quantizer For Maximizing Mutual Information

We consider a channel with a binary input $X$ corrupted by continuous-valued noise, producing a continuous-valued output $Y$. A binary quantizer maps the continuous-valued output $Y$ to the final binary output $Z$, and we seek the quantizer that maximizes the mutual information $I(X; Z)$. We show that when the ratio of the channel conditional densities $r(y) = \frac{P(Y=y \mid X=0)}{P(Y=y \mid X=1)}$ is a strictly increasing or strictly decreasing function of $y$, a quantizer with a single threshold maximizes the mutual information. Furthermore, we show that an optimal quantizer (possibly with multiple thresholds) is the one whose threshold vector consists of all the solutions of $r(y) = r^{*}$ for some constant $r^{*} > 0$. In addition, we characterize necessary conditions, via a fixed-point theorem, for the optimality and uniqueness of a quantizer. Based on these conditions, we propose an efficient procedure for determining all locally optimal quantizers, from which a globally optimal quantizer can be found. Our results also confirm some previous results using alternative elementary proofs.
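
To make the single-threshold result concrete, here is a minimal numerical sketch in Python. It assumes a BPSK-style binary-input AWGN channel ($X=0$ gives $Y \sim N(-1, \sigma^2)$, $X=1$ gives $Y \sim N(+1, \sigma^2)$), for which the density ratio $r(y)$ is strictly monotone in $y$, so by the result above a single threshold $t$ (with $Z = 1\{Y > t\}$) is optimal. The channel parameters and helper names (`h2`, `mi_binary`) are illustrative choices, not from the paper, and the brute-force grid sweep merely stands in for the paper's fixed-point procedure.

```python
import numpy as np
from scipy.stats import norm

def h2(q):
    """Binary entropy in bits, clipped away from 0 and 1 for numerical safety."""
    q = np.clip(q, 1e-12, 1.0 - 1e-12)
    return -q * np.log2(q) - (1.0 - q) * np.log2(1.0 - q)

def mi_binary(p0, q0, q1):
    """I(X;Z) for binary X, Z with p0 = P(X=0), q0 = P(Z=1|X=0), q1 = P(Z=1|X=1).

    Uses I(X;Z) = H(Z) - H(Z|X) = h2(P(Z=1)) - p0*h2(q0) - (1-p0)*h2(q1).
    """
    pz1 = p0 * q0 + (1.0 - p0) * q1
    return h2(pz1) - p0 * h2(q0) - (1.0 - p0) * h2(q1)

# Assumed example channel: X=0 -> Y ~ N(-1, sigma^2), X=1 -> Y ~ N(+1, sigma^2).
# Here r(y) = f(y|X=0)/f(y|X=1) is strictly decreasing in y, so a single
# threshold t with Z = 1{Y > t} suffices.
p0, sigma = 0.5, 0.8
thresholds = np.linspace(-3.0, 3.0, 2001)
q0 = norm.sf(thresholds, loc=-1.0, scale=sigma)  # P(Y > t | X=0)
q1 = norm.sf(thresholds, loc=+1.0, scale=sigma)  # P(Y > t | X=1)
mi = mi_binary(p0, q0, q1)

t_star = thresholds[np.argmax(mi)]
print(f"best threshold t* = {t_star:.3f}, I(X;Z) = {mi.max():.4f} bits")
```

With $p_0 = 0.5$ the channel is symmetric and the sweep recovers $t^{*} = 0$, as expected; with an asymmetric prior the optimal threshold shifts away from zero, which is consistent with the characterization $r(t^{*}) = r^{*}$ for a prior-dependent constant $r^{*}$.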
