Optimum Soft Decision Decoding With Graceful Degradation

We demonstrate that the complex decision variables emanating directly from the channel receiver can be used efficiently for optimum decoding. The data are protected by a linear block code designed to minimize the mean-square error between the numerical representations of the input and output words. This criterion yields a natural graceful-degradation property: if the channel noise occasionally exceeds the performance capabilities of the code, any decoding errors are confined to the least significant positions. The channel model is described by a conditional probability density function that obeys a realistic additive assumption. A general optimum design procedure for selecting the encoding and decoding rules is presented; it employs generalized Fourier transform coefficients of the channel density function. We examine the structure of the decoder, emphasizing the special case of a memoryless channel, for which the optimum decoder uses a state-space mechanization resembling the Viterbi decoder. Practical examples of coherent phase and frequency digital modulation formats are discussed, and the general results are extended to the situation where optimum symbol-by-symbol rules are employed.
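
To illustrate the flavor of the criterion, the following Python fragment is a minimal sketch, not the paper's construction: it decodes a hypothetical (7,4) Hamming code over an additive Gaussian channel by forming the posterior mean (MMSE estimate) of the binary-weighted data value directly from the unquantized channel outputs. When the noise occasionally overwhelms the code, the conditional mean drifts only modestly from the true value, so residual numerical errors tend to be small, which is the graceful-degradation idea in spirit.

```python
# Minimal sketch (assumed setup, not the paper's optimized code design):
# soft-decision MMSE decoding of a small binary block code with BPSK signalling
# over an AWGN channel, returning the posterior mean of the data value.
import itertools
import numpy as np

# Hypothetical (7,4) Hamming code generator matrix; any linear block code would do.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=int)
k, n = G.shape

# Enumerate all codewords and the numerical value each message represents.
messages = np.array(list(itertools.product([0, 1], repeat=k)), dtype=int)
codewords = messages @ G % 2
values = messages @ (2 ** np.arange(k - 1, -1, -1))   # MSB-first binary weighting

def mmse_decode(r, sigma2):
    """Posterior-mean (MMSE) estimate of the data value from soft outputs r."""
    s = 1 - 2 * codewords                       # BPSK mapping: 0 -> +1, 1 -> -1
    # Log-likelihood of each codeword under the additive Gaussian channel model.
    loglik = -np.sum((r - s) ** 2, axis=1) / (2 * sigma2)
    post = np.exp(loglik - loglik.max())
    post /= post.sum()
    return float(post @ values)                 # conditional mean of the data value

# Usage: transmit the message 1011 (value 11) and decode its soft-received word.
rng = np.random.default_rng(0)
msg = np.array([1, 0, 1, 1])
tx = 1 - 2 * (msg @ G % 2)
sigma2 = 0.5
rx = tx + rng.normal(scale=np.sqrt(sigma2), size=n)
print(mmse_decode(rx, sigma2))                  # estimate of the transmitted value 11
```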
