Optimal quantizer structure for binary discrete input continuous output channels under an arbitrary quantized-output constraint

Consider a channel with binary input X = (x_1, x_2) distributed as p_X = (p_{x_1}, p_{x_2}) that is corrupted by continuous noise to produce a continuous output y \in Y = R. Given the conditional densities p(y|x_1) = \phi_1(y) and p(y|x_2) = \phi_2(y), we want to quantize the continuous output y to a final discrete output Z = (z_1, z_2, ..., z_N) with N \geq 2 such that the mutual information I(X; Z) between the input and the quantized output is maximized, while the distribution of the quantized output p_Z = (p_{z_1}, p_{z_2}, ..., p_{z_N}) satisfies a given constraint. Introducing the new variable r_y = p_{x_1}\phi_1(y) / (p_{x_1}\phi_1(y) + p_{x_2}\phi_2(y)), we show that the optimal quantizer has a structure of convex cells in r_y. Based on this convex-cells property of the optimal quantizers, a fast algorithm is proposed that finds the globally optimal quantizer in polynomial time.
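As an illustration of the convex-cells structure (a sketch, not the paper's algorithm), the following assumes an unconstrained N = 2 quantizer for equiprobable binary inputs under additive Gaussian noise with assumed means -1 and +1. Because the optimal cells are convex (i.e., intervals) in r_y, a two-level quantizer reduces to a single threshold on r_y, which can be found by a one-dimensional sweep:

```python
import numpy as np

# Hypothetical example channel: p_X = (0.5, 0.5), phi_1 = N(-1, 1), phi_2 = N(+1, 1).
p1, p2 = 0.5, 0.5

def phi(y, mu, sigma=1.0):
    # Gaussian density with mean mu and standard deviation sigma.
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Discretize y on a fine grid to approximate the integrals over each cell.
y = np.linspace(-6.0, 6.0, 4001)
dy = y[1] - y[0]
f1, f2 = phi(y, -1.0), phi(y, +1.0)
r = p1 * f1 / (p1 * f1 + p2 * f2)        # the posterior variable r_y

def mutual_info(mask):
    """I(X;Z) in bits for the 2-cell quantizer defined by mask and its complement."""
    I = 0.0
    for cell in (mask, ~mask):
        pz = np.sum((p1 * f1 + p2 * f2)[cell]) * dy   # P(z)
        for px, f in ((p1, f1), (p2, f2)):
            pxz = px * np.sum(f[cell]) * dy           # joint P(x, z)
            if pxz > 0:
                I += pxz * np.log2(pxz / (px * pz))
    return I

# Convex cells in r_y: for N = 2, sweep a single threshold t and keep the best.
best_t, best_I = max(((t, mutual_info(r < t))
                      for t in np.linspace(0.05, 0.95, 91)),
                     key=lambda pair: pair[1])
print(round(best_t, 2), round(best_I, 3))
```

By symmetry of this example channel, the sweep recovers the threshold r_y = 0.5 (equivalently y = 0), matching the intuition that the optimal unconstrained two-level quantizer is a maximum a posteriori decision; a general constraint on p_Z would shift the threshold away from 0.5.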
