What is a pixel?

The full chain of processing required to view a pixel includes antialiasing, offset sampling, color space projection, reconstruction filter compensation, compositing, gamma correction, and quantization and dithering. Looking at these operations together reveals a pattern: almost all of them throw away information. We filter out high frequencies, quantize intensities into bins, project a continuous color spectrum onto three numbers, and represent geometric edges with a single transparency value. Seen this way, an ordinary hardware pixel, whether refreshed on the screen or stored in a file, is simply a bad data compression technique. Any rendering algorithm or image processing operation that converts data to pixels generally loses information about the original input: a few polygons become thousands of pixels; a high-resolution image becomes a low-resolution one. Conversion to pixels for viewing used to be slow, so image generation was done offline, but with faster processors that is no longer necessary. We can recalculate the image whenever we need to look at it.
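To make the information loss concrete, here is a minimal sketch (not from the article) of just two of the steps listed above, gamma correction and 8-bit quantization. The gamma value, the bit depth, and the NumPy-based helpers are illustrative assumptions; the point is only that many distinct linear intensities collapse into the same stored code and cannot be recovered exactly.

```python
import numpy as np

GAMMA = 2.2    # assumed display gamma; the article does not fix a value
LEVELS = 256   # 8-bit quantization bins

def encode(linear):
    """Gamma-correct and quantize a linear intensity in [0, 1] to 8 bits."""
    coded = np.clip(linear, 0.0, 1.0) ** (1.0 / GAMMA) * (LEVELS - 1)
    return np.round(coded).astype(np.uint8)

def decode(code):
    """Invert the encoding as well as possible; quantization loss remains."""
    return (code / (LEVELS - 1)) ** GAMMA

# Finely sampled stand-in for a "continuous" range of intensities.
linear = np.linspace(0.0, 1.0, 10_000)
roundtrip = decode(encode(linear))

print("distinct inputs :", len(np.unique(linear)))      # 10000
print("distinct outputs:", len(np.unique(roundtrip)))   # at most 256
print("max round-trip error:", np.abs(roundtrip - linear).max())
```

Running the sketch shows ten thousand distinct inputs collapsing to at most 256 distinct outputs, which is the sense in which the pixel acts as a lossy compression of the data behind it.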
