Source coding of the discrete Fourier transform

Distortion-rate theory is used to derive absolute performance bounds and encoding guidelines for direct fixed-rate minimum mean-square error data compression of the discrete Fourier transform (DFT) of a stationary real or circularly complex sequence. Both real-part-imaginary-part and magnitude-phase-angle encoding are treated. General source coding theorems are proved in order to justify using the optimal test channel transition probability distribution for allocating the information rate among the DFT coefficients and for calculating arbitrary performance measures on actual optimal codes. This technique yields a theoretical measure of the relative importance of the phase angle over the magnitude in magnitude-phase-angle data compression. The result is that the phase angle must be encoded with 0.954 nats, or 1.37 bits, more rate than the magnitude at rates exceeding 3.0 nats per complex element. This result and the optimal error bounds are compared with empirical results for efficient quantization schemes.
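
To make the rate-allocation step concrete, the following is a minimal sketch, assuming independent Gaussian DFT coefficients under squared-error distortion and using the textbook reverse water-filling allocation; it is an illustration of the general idea, not the paper's exact procedure, and the `variances` array, the total rate of 6.0 nats, and the bisection tolerance are all hypothetical choices. It also checks the nats-to-bits conversion behind the quoted 0.954 nats, or about 1.37 bits, figure.

```python
import numpy as np

# Hypothetical per-coefficient variances of the DFT of a stationary sequence;
# in practice these would follow from the process's power spectral density.
variances = np.array([4.0, 2.0, 1.0, 0.5, 0.25, 0.1])

def reverse_water_filling(variances, total_rate_nats):
    """Split a total rate (in nats) across independent Gaussian coefficients
    under squared-error distortion: a coefficient with variance s2 above the
    water level theta receives 0.5*ln(s2/theta) nats; coefficients below
    theta receive zero rate."""
    lo, hi = 1e-12, float(variances.max())
    for _ in range(200):  # bisect on the water level theta
        theta = 0.5 * (lo + hi)
        rates = 0.5 * np.log(np.maximum(variances / theta, 1.0))
        if rates.sum() > total_rate_nats:
            lo = theta   # allocated too much rate -> raise the water level
        else:
            hi = theta
    return rates

rates_nats = reverse_water_filling(variances, total_rate_nats=6.0)
print("per-coefficient rates (nats):", np.round(rates_nats, 3))
print("per-coefficient rates (bits):", np.round(rates_nats / np.log(2), 3))

# The headline figure: an extra 0.954 nats for the phase angle is
# 0.954 / ln 2 ~= 1.376, i.e. the 1.37 bits quoted above.
print("0.954 nats =", round(0.954 / np.log(2), 3), "bits")
```

Bisection on the water level is only one way to solve for the allocation; a closed-form solution over the set of coefficients receiving nonzero rate works equally well. The same allocation idea applies whether each complex coefficient is encoded as a real-imaginary pair or as a magnitude-phase pair; the two representations differ only in which pair of functions of the coefficient is quantized.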
