All computer-generated images are displayed by intensifying pixels on a display. Each pixel, which extends spatially and temporally, has two relevant properties: its position and its intensity profile. The interaction between position and image quality, which can produce the aliasing of high frequencies onto low ones, has been extensively explored, and very successful treatments for the resulting artifacts are well known. By contrast, there is little practical understanding of the interaction between intensity profile and image quality.
This thesis examines the interaction between pixel intensity profiles and image quality, taking into account the spatiotemporal characteristics of the human visual system. The interaction is examined by considering the general problem: given a device with given pixel locations and intensity profiles, what set of intensity values best represents a given spatiotemporal image? Within the restricted but practical case in which pixel locations are periodic in space and time and pixel intensity profiles are identical, two solution techniques are explored. One directly minimizes discrepancies between the desired image and the image generated by the display device. The other chooses pixel intensities to minimize differences in the Fourier domain, with the differences weighted by the corresponding sensitivities of the human visual system.
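Both techniques amount to linear least-squares problems over the pixel intensities. The following is a minimal 1-D sketch under stated assumptions: the pixel count, image sampling, Gaussian profile width, test image, and the CSF-like frequency weighting are all illustrative placeholders, not the thesis's actual model.

```python
import numpy as np

# Hypothetical 1-D display: N pixels with identical Gaussian spatial profiles
# on a periodic lattice. The displayed signal is linear in the intensities x,
# i.e. displayed = A @ x, where column j of A is pixel j's profile.
N = 32                                      # number of pixels (assumed)
M = 256                                     # samples of the continuous image
positions = np.linspace(0, 1, N, endpoint=False)
grid = np.linspace(0, 1, M, endpoint=False)
sigma = 0.5 / N                             # assumed profile width

A = np.exp(-0.5 * ((grid[:, None] - positions[None, :]) / sigma) ** 2)

d = np.sin(2 * np.pi * 3 * grid)            # desired image (arbitrary example)

# Direct approach: minimize ||A x - d||_2 over the intensities x.
x_direct, *_ = np.linalg.lstsq(A, d, rcond=None)

# Fourier-weighted approach: minimize ||W F (A x - d)||_2, where F is the DFT
# and W holds per-frequency weights standing in for visual sensitivity.
F = np.fft.fft(np.eye(M)) / np.sqrt(M)
freqs = np.fft.fftfreq(M, d=1 / M)
W = np.diag(1.0 / (1.0 + np.abs(freqs)))    # placeholder CSF-like weighting
x_weighted, *_ = np.linalg.lstsq(W @ F @ A, W @ F @ d, rcond=None)
```

With a flat weighting (W proportional to the identity) the two solutions coincide, since the DFT is unitary; the interesting cases are non-flat weightings that emphasize frequencies the eye resolves well.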
These two techniques are explored in detail for pixels with an exponential temporal intensity profile and a Gaussian spatial profile. Each method is examined in several different norms, with pixel intensities both constrained and unconstrained. (Constraints are relevant because the contrast achievable with unconstrained fitting is very restricted for some devices.) These results are calculated assuming that the temporal degrees of freedom are independent of the spatial ones; under the same assumption, the spatial intensity profiles are examined. Throughout these calculations, algorithms that manipulate circulant matrices provide a computationally effective means of determining pixel intensities. When the spatial and temporal degrees of freedom are taken together, these algorithms can no longer be used, because the spatial and temporal responses of the human visual system are not separable. Since solutions then require less efficient numerical methods, the emphasis in that part of the thesis is on differences between the unseparated solutions and those produced using separable approximations.