Single-bit Quantization Capacity of Binary-input Continuous-output Channels

We consider a channel whose binary input X is corrupted by a given continuous noise to produce a continuous-valued output Y. A quantizer then maps the continuous-valued output Y to a final binary output Z. The goal is to design an optimal quantizer Q* and to find the optimal input distribution p*(X) that jointly maximize the mutual information I(X; Z) between the binary input and the binary quantized output. A search procedure with linear time complexity is proposed. Based on properties of the optimal quantizer and the optimal input distribution, we reduce the search range, which yields a faster algorithm. Both theoretical and numerical results are provided to illustrate our method.
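To make the setup concrete, the following is a minimal sketch of single-threshold quantization of a binary-input continuous-output channel. It assumes additive Gaussian noise, signal levels of -1 and +1, and a brute-force grid search over the threshold t and the prior P(X = 1); these modeling choices and the grid resolution are illustrative assumptions, and the grid search is not the linear-time procedure proposed in the paper.

```python
# Sketch: I(X; Z) for a threshold quantizer Z = 1{Y > t} on a binary-input
# Gaussian-noise channel, maximized by brute-force grid search over (t, p).
# Assumptions (not from the paper): Gaussian noise, levels -1/+1, grid search.
import numpy as np
from scipy.stats import norm

def mutual_information_bits(p, q0, q1):
    """I(X; Z) in bits for P(X=1)=p, P(Z=1|X=0)=q0, P(Z=1|X=1)=q1."""
    def h(x):  # binary entropy, clipped to avoid log(0)
        x = np.clip(x, 1e-12, 1 - 1e-12)
        return -x * np.log2(x) - (1 - x) * np.log2(1 - x)
    pz1 = (1 - p) * q0 + p * q1          # P(Z = 1)
    return h(pz1) - (1 - p) * h(q0) - p * h(q1)

def best_threshold_and_prior(sigma=1.0, n_grid=401):
    """Grid search over thresholds t and priors p (illustrative only)."""
    thresholds = np.linspace(-4.0, 4.0, n_grid)
    priors = np.linspace(0.01, 0.99, n_grid)
    best_mi, best_t, best_p = -np.inf, None, None
    for t in thresholds:
        q0 = norm.sf(t, loc=-1.0, scale=sigma)  # P(Y > t | X = 0)
        q1 = norm.sf(t, loc=+1.0, scale=sigma)  # P(Y > t | X = 1)
        for p in priors:
            mi = mutual_information_bits(p, q0, q1)
            if mi > best_mi:
                best_mi, best_t, best_p = mi, t, p
    return best_mi, best_t, best_p

if __name__ == "__main__":
    mi, t, p = best_threshold_and_prior(sigma=1.0)
    print(f"I(X;Z) ~ {mi:.4f} bits at threshold t ~ {t:.3f}, P(X=1) ~ {p:.3f}")
```

For the symmetric Gaussian case above, the search should recover the expected answer of a threshold near 0 and a uniform input distribution; asymmetric noise or asymmetric signal levels would shift both.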
