Toward a perceptual video-quality metric
The advent of widespread distribution of digital video creates a need for automated methods for evaluating the visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics, and the economic need to reduce bit-rate to the lowest level that yields acceptable quality. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. Here I describe a new video quality metric that is an extension of these still image metrics into the time domain. Like the still image metrics, it is based on the Discrete Cosine Transform. An effort has been made to minimize the amount of memory and computation required by the metric, so that it might be applied in the widest range of applications. To calibrate the basic sensitivity of this metric to spatial and temporal signals, we have made measurements of visual thresholds for temporally varying samples of DCT quantization noise.
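The general scheme described above, in which coefficient errors in the DCT domain are scaled by visual thresholds and then pooled, can be sketched in a few lines of code. The following is a minimal, hypothetical illustration only: the block size, the flat threshold matrix, the pooling exponent `beta`, and the function names are assumptions for exposition, not the calibrated parameters or implementation of the metric described here.

```python
# Minimal sketch of a DCT-domain perceptual error pooling scheme.
# All numeric parameters below are illustrative assumptions.
import numpy as np

BLOCK = 8

def dct_matrix(n=BLOCK):
    """Orthonormal DCT-II basis matrix (n x n)."""
    k = np.arange(n).reshape(-1, 1)
    x = np.arange(n).reshape(1, -1)
    c = np.cos(np.pi * (2 * x + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    c[0, :] /= np.sqrt(2.0)
    return c

C = dct_matrix()

def block_dct(frame):
    """8x8 block DCT of a grayscale frame whose dimensions are multiples of 8."""
    h, w = frame.shape
    blocks = frame.reshape(h // BLOCK, BLOCK, w // BLOCK, BLOCK).swapaxes(1, 2)
    return C @ blocks @ C.T  # matmul broadcasts over all blocks

def perceptual_error(reference, test, thresholds, beta=4.0):
    """
    Express DCT coefficient errors in units of their (assumed) visual
    thresholds, then pool them with a Minkowski norm of exponent beta.
    """
    diff = block_dct(test.astype(float)) - block_dct(reference.astype(float))
    jnds = np.abs(diff) / thresholds  # error in just-noticeable-difference units
    return (jnds ** beta).sum() ** (1.0 / beta)

# Usage with synthetic data and a flat, purely illustrative threshold matrix:
rng = np.random.default_rng(0)
ref = rng.uniform(0, 255, (64, 64))
tst = ref + rng.normal(0, 2, ref.shape)
print(perceptual_error(ref, tst, thresholds=np.full((BLOCK, BLOCK), 10.0)))
```

In practice the threshold matrix would vary with spatial frequency, mean luminance, and, for a video metric, with temporal frequency, which is where the threshold measurements for temporally varying DCT quantization noise mentioned above would enter.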