Appearance of High-Dynamic Range Images in a Uniform Lightness Space

High Dynamic Range (HDR) imaging techniques capture greater ranges of scene information and attempt to convey that information with HDR displays. HDR imaging can improve the rendering of most scenes. Is the improved appearance caused by the increased luminance range and increased accuracy of display luminances? This paper examines the effect of increased luminance range on appearance in uniform color spaces. Our experiments show that increasing the luminance range of a transparent display by a factor of 500 has minimal effect on appearance. Two important image-dependent mechanisms are responsible for the small amount of change. First, intraocular scattered light, or veiling glare, limits the range of luminances on the retina. Second, human spatial processing, as seen in simultaneous contrast experiments, makes scatter-limited retinal images appear to have higher contrast. These two image-dependent mechanisms counteract each other: scatter decreases the stimulus range on the retina, while spatial comparisons heighten apparent contrast. Together, these mechanisms account for the observed small changes in appearance range despite large changes in luminance range.

Introduction

This paper studies the interrelationship of three aspects of human color vision. In the 1860s, Maxwell made the first measurements of human color sensitivity [1]. Since then, color has been studied using psychophysics to measure color matches and color appearances. Since 1964, physiological measurements of cone receptors' absorption spectra and electrophysiology have studied color vision at the cellular level [2-4]. Since 1971, High Dynamic Range (HDR) image capture and image processing have studied color in the computer for processing and reproduction [5, 6]. Psychophysics, physiology, and HDR imaging are three parts of today's color science. However, they are often discussed independently, without considering their interrelationships.
This paper studies the points of agreement, points of possible disagreement, and a framework for combining all three disciplines.

Uniform color spaces – Psychophysics

Human psychophysics provides two distinct data sets describing color. The first is the set of Color Matching Functions (CMFs), which describe when the two halves of a circle, each with a different spectral composition, will match [1]. Such color identities can be predicted by converting spectral radiance measurements (380 to 700 nm) into the tristimulus values X, Y, Z using the 1931 CIE standard CMFs [7]. Since psychophysics has no technique for measuring the peak wavelength sensitivities of L, M, and S cones, X, Y, Z are attempts at measuring linear transforms of cone sensitivities convolved with pre-retinal absorptions. The XYZ color matching functions can predict whether patches will match, but cannot predict the color appearance of those patches, because X, Y, Z cannot take into account human spatial processing of other stimuli in the field of view [8]. The second distinct set of color data describes uniform color spaces. The X, Y, Z values, from color matching experiments, form a 3D space. X, Y, Z space is not isotropic in appearance: equal Euclidean distances between locations in X, Y, Z space do not predict equal changes in appearance. Munsell [9] and others [10, 11] asked observers to find samples that appeared to be equally spaced. A uniform color space is important in both theoretical and practical aspects of color theory and color applications. A uniform color space places observer data in an isotropic 3D appearance space. This information is critical in color theory because it provides the basis set for models of appearance [7, 12]. It is critical in color reproduction because we can use it to distribute limited scene data (24 bits) so as to minimize quantization artifacts [13].
Munsell, Ostwald, OSA Uniform Color Space, NCS, and ColorCurve are examples of observations leading to uniform color spaces. Munsell is unique because it has no external restrictions imposed on the observers [11]. CIELAB, CIELUV, and CIECAM are examples of computational models of uniform color spaces. In this paper we use CIELAB. We realize that there are other, more recent color space models, and that there are issues of accuracy of uniformity associated with L*a*b* [11]. Nevertheless, L*a*b*, with its long history and great computational expediency, enjoys great popularity and common usage. For these reasons, along with the fact that so many people are familiar with L*a*b*, we will use it as the computational model that converts quanta caught by receptors into color appearance in this paper.

Conversion of radiance to calculated position in L*a*b* uniform color space

CIE 1931 standard colorimetry calculates the tristimulus X, Y, Z triplet from radiance by integrating the light spectrum coming to the eye with the x̄, ȳ, z̄ color matching functions. Using the CIE 1976 standard color space, we calculate L*a*b* from X, Y, Z [14]. Lightness (L*) calculates appearances between white and black; a* calculates appearances between red and green-blue; b* calculates appearances between yellow and blue. The goal of these formulae is to convert X, Y, Z to an isotropic color space, where all constant Euclidean distances have constant differences in appearance. The formulas for converting linear (radiance) spectral values (X, Y, Z) to L*a*b* are as follows:

  L* = 116 (Y/Yn)^(1/3) − 16                          (1)
  L* = 903.3 (Y/Yn), for (Y/Yn) ≤ 0.008856            (1a)
  a* = 500 [(X/Xn)^(1/3) − (Y/Yn)^(1/3)]              (2)
  b* = 200 [(Y/Yn)^(1/3) − (Z/Zn)^(1/3)]              (3)

where Xn, Yn, Zn are the integrals for the reference white (radiant power reflected from a perfect diffuser in the viewing illuminant).
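The first step described above, numerically integrating a radiance spectrum against the color matching functions to obtain X, Y, Z, can be sketched as follows. This is a minimal illustration: the `tristimulus` helper and its argument layout are our own, and a real calculation would supply the tabulated CIE 1931 x̄, ȳ, z̄ values, which are not built in here.

```python
# Sketch: X, Y, Z as numerical integrals of spectral radiance against
# the CIE 1931 color matching functions, sampled on 380-700 nm.
# NOTE: callers must supply the tabulated CMF values; none are built in.
def tristimulus(wavelengths, radiance, xbar, ybar, zbar):
    """Riemann-sum approximation of X = integral E(l)*xbar(l) dl, etc.
    All five arguments are equal-length lists on a uniform wavelength grid."""
    dl = wavelengths[1] - wavelengths[0]  # uniform spacing in nm
    X = sum(E * x for E, x in zip(radiance, xbar)) * dl
    Y = sum(E * y for E, y in zip(radiance, ybar)) * dl
    Z = sum(E * z for E, z in zip(radiance, zbar)) * dl
    return X, Y, Z
```

The same routine applied to the radiance of the reference white yields the normalizing integrals Xn, Yn, Zn.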
For equations 2 and 3, when any of the ratios X/Xn, Y/Yn, Z/Zn is less than or equal to 0.008856, the cube root is replaced by a linear segment:

  (X/Xn)^(1/3) is replaced by 7.787 (X/Xn) + 16/116,
  (Y/Yn)^(1/3) is replaced by 7.787 (Y/Yn) + 16/116,
  (Z/Zn)^(1/3) is replaced by 7.787 (Z/Zn) + 16/116.

The way that L*a*b* handles High Dynamic Range (HDR) scenes has two components. The first is the use of cube root functions in both lightness and chroma. The second is the use of other functions to force the calculation to zero asymptotes. In all cases, the first step in evaluating this set of equations is to normalize the long-, middle-, and short-wave integrals to the maxima in each channel: (X/Xn), (Y/Yn), (Z/Zn). Human vision normalizes appearances to maxima in the L, M, and S channels [15-17]. This operation converts quanta catches to relative luminances. The next step in all calculations raises these normalized integrals to the power of 1/3, the cube root. This compressive step shapes the normalized X, Y, and Z to approach color space uniformity. In HDR terminology, this step scales the large range of possible radiances into a limited range of appearances. Figure 1 shows calculated Lightness L* (equations 1, 1a) vs. log luminance for a range covering six log units. Since we are concerned with the study of color over HDR images, we plot luminance information as relative optical density (OD = log10[1/(Y/Yn)]). The vertical yellow line identifies the luminance that divides the regions used: on the right side of the yellow line equation (1) applies; on the left side equation (1a) applies. L* describes white as 100. On this graph L* = 100 plots at 0 relative optical density, or 100% (Y/Yn). When we reduce relative luminance by one half, equation (1) reduces L* by 24%. To get L* = 50, we have to reduce luminance to 18%. The yellow line delimits the ranges of the L* equations and falls at a Lightness of 9 and 0.9% in luminance. There is no cube root function in equation 1a.
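Taken together, equations 1–3 and the low-luminance substitutions above amount to a single piecewise function applied to each normalized integral. A minimal sketch, with function names of our own choosing:

```python
# Sketch of the CIE 1976 X, Y, Z -> L*a*b* conversion, including the
# linear substitution below the 0.008856 break point described above.
def _f(t):
    # Cube root above the break point; linear segment 7.787*t + 16/116 below.
    return t ** (1.0 / 3.0) if t > 0.008856 else 7.787 * t + 16.0 / 116.0

def xyz_to_lab(X, Y, Z, Xn, Yn, Zn):
    """Xn, Yn, Zn are the reference-white integrals."""
    fx, fy, fz = _f(X / Xn), _f(Y / Yn), _f(Z / Zn)
    L = 116.0 * fy - 16.0    # Lightness, eq. (1)/(1a)
    a = 500.0 * (fx - fy)    # red vs. green-blue, eq. (2)
    b = 200.0 * (fy - fz)    # yellow vs. blue, eq. (3)
    return L, a, b
```

For the reference white itself (X = Xn, etc.) this returns L* = 100 with a* = b* = 0, and at 18% relative luminance it returns L* of about 50, matching the 18% figure mentioned above.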
Equation 1a controls the shape of the asymptote to 0 lightness. L* = 1 falls at OD = 3.0, or 0.1% luminance. In other words, L* suggests that 99% of the usable (Y/Yn) information falls within 3 log units of scene dynamic range. Since a* and b* use the same compressive cube root function on (Y/Yn), (X/Xn), (Z/Zn), L*a*b* evaluates the very large range of X, Y, and Z in the scene over a 3-log-unit cube. Uniform color spacing is achieved by the cube root function. The a* and b* calculations use a different function below 0.9% relative luminance.

Figure 1. Equations 1 and 1a (circles) plotted vs. relative optical density. The yellow line delimits the range of each equation. L* reaches a value of 1 at OD = 3.0. For comparison, the triangles plot a log10 function over
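The numbers behind Figure 1 can be checked directly from equations 1 and 1a. The short script below (our own, assuming only those two equations) evaluates L* at a few optical densities; L* falls from 100 at OD = 0 to roughly 1 at OD = 3.0:

```python
def lightness(y_rel):
    """L* from relative luminance Y/Yn: eq. (1) above the 0.008856
    break point, the linear eq. (1a) substitution below it."""
    if y_rel > 0.008856:
        return 116.0 * y_rel ** (1.0 / 3.0) - 16.0
    return 116.0 * (7.787 * y_rel + 16.0 / 116.0) - 16.0

# OD = log10(1/(Y/Yn)); OD 0 is white, OD 3 is 0.1% luminance.
for od in (0.0, 0.3, 0.745, 3.0):
    print(f"OD {od:5.3f}  L* = {lightness(10.0 ** -od):6.2f}")
```

Halving the luminance (OD 0.3) lowers L* by about 24%, and 18% luminance (OD ≈ 0.745) gives L* of about 50, consistent with the values quoted in the text.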

References

[1] Mark D. Fairchild, et al., Color Appearance Models, 1997, Computer Vision, A Reference Guide.
[2] G. G. Stokes, "J.", 1890, The New Yale Book of Quotations.
[3] W. B. Marks, et al., Visual Pigments of Single Primate Cones, 1964, Science.
[4] J. Pokorny, et al., Spectral sensitivity of the foveal cone photopigments between 400 and 500 nm, 1975, Vision Research.
[5] J. McCann, Rules for colour constancy, 1992, Ophthalmic & Physiological Optics.
[6] D. B. Judd, et al., Final Report of the O.S.A. Subcommittee on the Spacing of the Munsell Colors, 1943.
[7] John J. McCann, et al., Visibility of Gradients and Low Spatial Frequency Sinusoids: Evidence for a Distance Constancy Mechanism, 2007.
[8] Pier Giuseppe Rossi, et al., Art, 2000, The Lancet.
[9] John J. McCann, The history of spectral sensitivity functions for humans and imagers: 1861 to 2004, 2005, IS&T/SPIE Electronic Imaging.
[10] S. McKee, et al., Quantitative studies in retinex theory: a comparison between theoretical predictions and observer responses to the "color mondrian" experiments, 1976, Vision Research.
[11] J. Dowling, et al., Organization of the retina of the mudpuppy, Necturus maculosus. II. Intracellular recording, 1969, Journal of Neurophysiology.
[12] Alessandro Rizzi, et al., Veiling glare: the dynamic range limit of HDR images, 2007, Electronic Imaging.
[13] E. Land, et al., Lightness and retinex theory, 1971, Journal of the Optical Society of America.
[14] Raymond A. Eynard, Color: Theory and Imaging Systems, 1973.
[15] Alessandro Rizzi, et al., Camera and visual veiling glare in HDR images, 2007.
[16] J. Mollon, et al., Human visual pigments: microspectrophotometric results from the eyes of seven persons, 1983, Proceedings of the Royal Society of London, Series B, Biological Sciences.
[17] Alessandro Rizzi, et al., Separating the effects of glare from simultaneous contrast, 2008, Electronic Imaging.
[18] G. Wald, et al., Visual Pigments in Single Rods and Cones of the Human Retina, 1964, Science.
[19] John J. McCann, et al., Spatial Comparisons: The Antidote to Veiling Glare Limitations in Image Capture and Display, 2007.
[20] Alessandro Rizzi, et al., The Spatial Properties of Contrast, 2003.
[21] W. Peddie, The Scientific Papers of James Clerk Maxwell, 1927, Nature.
[22] J. McCann, et al., Influence of intraocular scattered light on lightness-scaling experiments, 1983, Journal of the Optical Society of America.
[23] Alessandro Rizzi, et al., Glare-limited appearances in HDR images, 2009.