Structure of Optimal Quantizer for Binary-Input Continuous-Output Channels with Output Constraints

In this paper, we consider a channel whose input is a binary random source $X \in \{x_1, x_2\}$ with probability mass function (pmf) $p_X = [p_{x_1}, p_{x_2}]$ and whose output is a continuous random variable $Y \in \mathbb{R}$, the result of continuous noise characterized by the channel conditional densities $p_{y \mid x_1} = \phi_1(y)$ and $p_{y \mid x_2} = \phi_2(y)$. A quantizer $Q$ maps $Y$ back to a discrete set $Z \in \{z_1, z_2, \ldots, z_N\}$. To retain the most information about $X$, an optimal $Q$ is one that maximizes $I(X;Z)$. Our goal, however, is not only to recover $X$ but also to ensure that $p_Z = [p_{z_1}, p_{z_2}, \ldots, p_{z_N}]$ satisfies a given constraint. In particular, we are interested in designing a quantizer that maximizes $\beta I(X;Z) - C(p_Z)$, where $\beta$ is a trade-off parameter and $C(p_Z)$ is an arbitrary cost function of $p_Z$. Letting the posterior probability $p_{x_1 \mid y} = r_y = \frac{p_{x_1}\phi_1(y)}{p_{x_1}\phi_1(y) + p_{x_2}\phi_2(y)}$, our result shows that the optimal quantizer separates $r_y$ into convex cells. In other words, the optimal quantizer has the form $Q^{*}(r_y) = z_i$ if $a_{i-1}^{*} \leq r_y < a_i^{*}$, for some optimal thresholds $a_0^{*} = 0 < a_1^{*} < a_2^{*} < \cdots < a_{N-1}^{*} < a_N^{*} = 1$. Based on this optimal structure, we describe fast algorithms for determining the optimal quantizers.
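The structural result above says the search for an optimal quantizer reduces to a search over $N-1$ thresholds on the posterior $r_y$. The sketch below illustrates this for a binary-input AWGN channel with $N = 2$: it evaluates $\beta I(X;Z) - C(p_Z)$ for a single threshold on $r_y$ and brute-forces the best one. This is only an illustration of the threshold structure, not the paper's fast algorithms; the Gaussian channel, the specific cost $C(p_Z)$ (squared distance from the uniform pmf), and all function names are assumptions for the example.

```python
import numpy as np

def posterior_r(y, p1=0.5, sigma=1.0):
    """r_y = P(x_1 | y) for a BI-AWGN channel with means +1 / -1 (illustrative)."""
    phi1 = np.exp(-(y - 1.0) ** 2 / (2 * sigma ** 2))
    phi2 = np.exp(-(y + 1.0) ** 2 / (2 * sigma ** 2))
    return p1 * phi1 / (p1 * phi1 + (1 - p1) * phi2)

def objective(thresholds, beta=1.0, p1=0.5, sigma=1.0, n_grid=4001):
    """beta * I(X;Z) - C(p_Z) for a threshold quantizer on r_y.
    C(p_Z) here is an assumed example cost: squared distance from uniform."""
    y = np.linspace(-6, 6, n_grid)
    dy = y[1] - y[0]
    norm = np.sqrt(2 * np.pi * sigma ** 2)
    # joint densities p(x_i, y)
    pxy = np.stack([
        p1 * np.exp(-(y - 1.0) ** 2 / (2 * sigma ** 2)) / norm,
        (1 - p1) * np.exp(-(y + 1.0) ** 2 / (2 * sigma ** 2)) / norm,
    ])
    r = posterior_r(y, p1, sigma)
    # cells [a_{i-1}, a_i) on r_y, per the structural result
    edges = np.concatenate(([0.0], np.sort(thresholds), [1.0]))
    z = np.clip(np.searchsorted(edges, r, side="right") - 1, 0, len(edges) - 2)
    # accumulate p(x, z) by integrating the joint density over each cell
    pxz = np.zeros((2, len(edges) - 1))
    for i in range(2):
        np.add.at(pxz[i], z, pxy[i] * dy)
    pz = pxz.sum(axis=0)
    px = pxz.sum(axis=1)
    mask = pxz > 0
    mutual_info = np.sum(pxz[mask] * np.log2(pxz[mask] / np.outer(px, pz)[mask]))
    cost = np.sum((pz - 1.0 / len(pz)) ** 2)
    return beta * mutual_info - cost

# Brute-force the single threshold a_1 for N = 2; this 1-D search is
# justified because the optimum is a threshold quantizer on r_y.
candidates = np.linspace(0.05, 0.95, 181)
best = max(candidates, key=lambda a: objective([a]))
```

For the symmetric case ($p_{x_1} = 0.5$, symmetric noise) the search returns a threshold at $r_y = 0.5$, i.e. the sign detector, as expected; for larger $N$ the same objective would be optimized over sorted threshold vectors, which is where the paper's fast algorithms come in.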
