Adjustments for JPEG de-quantization coefficients

Summary form only given. In the JPEG baseline compression algorithm, the quantization loss in the DCT coefficients can be reduced by exploiting the observation that the distributions of the DCT coefficients peak at zero and decay exponentially. This means that the mid-point of a quantization interval, say $m$, which the JPEG decoder uses to reconstruct every coefficient falling within that interval, may be replaced by another point, say $y$, that lies within the interval but closer to zero. If we model the distributions by $\lambda e^{-\lambda|x|}$, where $\lambda > 0$ is a constant derivable from statistical parameters such as the mean or the variance, and we require the adjustment $q = |m - y|$ to be chosen so that the sum of the losses over all coefficients falling within a quantization interval is zero for each interval, we can derive $q = Q\left(e^{\lambda(Q-1)} + \tfrac{Q-2}{2}\right) / \left(2\left(e^{\lambda(Q-1)} - 1\right)\right) - 1/\lambda$, where $Q$ is the quantizer step size. To test the usefulness of this idea, we implemented two approaches: (1) the JPEG encoder computes $\lambda$ for each DCT coefficient distribution and passes it to the decoder as part of the coded data, and (2) the JPEG decoder estimates $\lambda$ incrementally from the quantized DCT coefficients as it decodes its input. Our experiments showed that neither approach yields much improvement, but we found a better approach (OUR) that requires no modeling of the DCT distribution. It computes the adjustment as $\sum (|m - y| \cdot C) / \sum C$, where $C$ is the number of coefficients falling within an interval and the sum is taken over all intervals that do not contain the zero DCT coefficient. We also implemented the formulation developed by Ahumada et al. (SID Digest, 1994) in order to compare it with the results of the OUR approach. The comparison is reported in terms of the percentage reduction in the RMSE of the images.
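
To make the zero-net-loss condition concrete, the sketch below (in Python, not from the paper) computes the shift from the interval midpoint to the interval centroid for a continuous Laplacian density and checks it numerically on a single interval. The interval convention $[(v-\tfrac12)Q, (v+\tfrac12)Q]$ for a nonzero index $v$, the function name, and the numeric values are assumptions for illustration; the closed form shown is the centroid shift under these assumptions, not necessarily the paper's exact expression.

```python
import numpy as np

def centroid_shift(lam, Q):
    """Shift q from the interval midpoint toward zero that makes the
    expected signed reconstruction error zero for a Laplacian density
    lam * exp(-lam * |x|), assuming nonzero-index intervals of the form
    [(v - 1/2) * Q, (v + 1/2) * Q].  The result does not depend on v."""
    return (Q / 2.0) / np.tanh(lam * Q / 2.0) - 1.0 / lam

# Numerical check on one interval (lam, Q and v are illustrative values).
lam, Q, v = 0.05, 16, 3
a, b = (v - 0.5) * Q, (v + 0.5) * Q
xs = np.linspace(a, b, 200_001)
w = np.exp(-lam * xs)                     # Laplacian weight on the positive side
y = (xs * w).sum() / w.sum()              # centroid of the interval
print(v * Q - y, centroid_shift(lam, Q))  # both shifts agree (about 1.06 here)
```

Under this continuous model the shift is the same for every nonzero interval, so a single $\lambda$ per DCT frequency band would suffice to correct all of its coefficients.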
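
The abstract does not spell out how the decoder estimates $\lambda$ incrementally. One simple possibility, shown here purely as an assumption, is a method-of-moments estimate based on the fact that $E|X| = 1/\lambda$ for a zero-mean Laplacian, applied to the nonzero reconstructed coefficients of one DCT frequency band.

```python
import numpy as np

def estimate_lambda(dequantized, eps=1e-9):
    """Method-of-moments estimate of the Laplacian parameter for one DCT
    frequency band: for a zero-mean Laplacian, E|X| = 1/lambda.  Only the
    nonzero reconstructed coefficients are used, since values quantized
    to zero carry no magnitude information."""
    x = np.asarray(dequantized, dtype=float)
    nz = np.abs(x[x != 0])
    if nz.size == 0:
        return None                      # nothing to estimate from
    return 1.0 / max(nz.mean(), eps)
```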
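
The OUR adjustment is described only at the level of the formula $\sum(|m-y| \cdot C)/\sum C$. The sketch below interprets $m$ as the dequantized value $vQ$, $y$ as the empirical mean of the original coefficients that fall into the interval of index $v$, and computes one adjustment per DCT frequency band; these interpretations and the function name are assumptions.

```python
import numpy as np

def our_adjustment(coeffs, Q):
    """Count-weighted average adjustment for one DCT frequency band,
    sum(|m - y| * C) / sum(C) over the intervals not containing zero,
    where m is the dequantized value v*Q, y the empirical mean of the
    original coefficients in that interval, and C the interval's count."""
    coeffs = np.asarray(coeffs, dtype=float)
    v = np.rint(coeffs / Q).astype(int)        # quantized indices
    num, den = 0.0, 0
    for idx in np.unique(v):
        if idx == 0:
            continue                           # skip the zero interval
        members = coeffs[v == idx]
        m = idx * Q                            # dequantized value / midpoint
        y = members.mean()                     # empirical centroid of the interval
        num += abs(m - y) * members.size
        den += members.size
    return num / den if den else 0.0
```

Presumably the decoder would then move each nonzero dequantized coefficient in that band toward zero by this single per-band amount.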