On the Uniqueness of Binary Quantizers for Maximizing Mutual Information

We consider a channel with a binary input X corrupted by continuous-valued noise, producing a continuous-valued output Y. A binary quantizer maps Y to the final binary output Z, and the quantizer is chosen to maximize the mutual information I(X; Z). We show that when the ratio of the channel conditional densities, r(y) = p(y | X = 0) / p(y | X = 1), is a strictly increasing or decreasing function of y, a quantizer with a single threshold maximizes the mutual information. Furthermore, we show that an optimal quantizer (possibly with multiple thresholds) is one whose threshold vector consists of all solutions of r(y) = r* for some constant r* > 0. Interestingly, this optimal constant r* is unique. The uniqueness property allows for fast algorithmic implementations, such as a bisection algorithm, to find the optimal quantizer. Our results also confirm several previous results via alternative elementary proofs. We provide numerical examples applying our results to channels with additive Gaussian noise.
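To make the bisection idea concrete, the following is a minimal sketch (not the paper's exact algorithm) for binary antipodal signaling over an additive Gaussian noise channel, where r(y) is strictly monotone and a single threshold is therefore optimal. The priors, noise level, search interval, and the use of a numerical derivative are assumptions made purely for this illustration; the bisection relies on dI(X;Z)/dt changing sign only once, consistent with the uniqueness result above.

```python
# Illustrative sketch: X in {0,1} is mapped to BPSK levels -1/+1 and corrupted by
# additive Gaussian noise; a single-threshold quantizer Z = 1{Y > t} is tuned by
# bisecting on the sign of dI(X;Z)/dt.  Priors p0, noise sigma, and the search
# interval are assumptions for this example, not values from the paper.
import math

def Q(x):
    """Gaussian tail probability P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def binary_entropy(p):
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def mutual_information(t, p0=0.3, sigma=1.0):
    """I(X;Z) for X=0 -> -1, X=1 -> +1, AWGN(sigma), and quantizer Z = 1{Y > t}."""
    p1 = 1.0 - p0
    q0 = Q((t + 1.0) / sigma)   # P(Z=1 | X=0)
    q1 = Q((t - 1.0) / sigma)   # P(Z=1 | X=1)
    pz1 = p0 * q0 + p1 * q1
    return binary_entropy(pz1) - p0 * binary_entropy(q0) - p1 * binary_entropy(q1)

def optimal_threshold(lo=-5.0, hi=5.0, tol=1e-9, **kw):
    """Bisect on the sign of dI/dt, assumed to change sign exactly once in [lo, hi]."""
    def dI(t, h=1e-6):
        return (mutual_information(t + h, **kw) - mutual_information(t - h, **kw)) / (2.0 * h)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dI(mid) > 0.0:
            lo = mid    # I(X;Z) still increasing: optimum lies to the right
        else:
            hi = mid    # I(X;Z) decreasing: optimum lies to the left
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    t_star = optimal_threshold()
    print(f"optimal threshold ~ {t_star:.4f}, I(X;Z) ~ {mutual_information(t_star):.4f} bits")
```

With the symmetric prior p0 = 0.5 the sketch recovers the expected threshold t = 0; the unequal prior used above shifts the optimal threshold toward the less likely input, matching the intuition behind the single-threshold result.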
