There are two very different kinds of color constancy. One studies the ability of humans to remain insensitive to the spectral composition of scene illumination. The other studies computer vision techniques for calculating the surface reflectances of objects under variable illumination. Camera-measured chromaticity has been used as a tool in computer-vision scene analysis. This paper measures the ColorChecker test target in uniform illumination to verify the accuracy of scene capture. We identify the limitations of sRGB camera standards, the dynamic-range limits of RAW scene captures, and the presence of camera veiling glare in areas darker than middle gray. Because of scene-dependent veiling glare, measurements of scene radiances and chromaticities made with spot meters are much more accurate than camera captures. Camera capture data must be verified by calibration.

Introduction

Studies of color constancy in complex scenes were originally done with simple images that allowed radiometric measurements of all image segments in the field of view (Land, 1964; Land & McCann, 1971). McCann, McKee and Taylor (1976) introduced the field of Computational Color Constancy, using digital input arrays of 20 by 24 pixels. This size seems ridiculously small now, but it was very large for the time. They showed that:

• 1. Observer color-constancy matches correlated with the Scaled Integrated Reflectance of Mondrian areas, calculated by an algorithm that made spatial comparisons.
• 2. The subtle departures from perfect constancy were modeled well by cone crosstalk in the spatial comparisons.

As digital imaging advanced, it became possible to automatically capture arrays of millions of digital values from complex scenes. As well, Computational Color Constancy has split into two distinct domains:

• Human Color Constancy (HCC) studies the ability of humans to be insensitive to the spectral composition of scene illumination.
Its goal is to calculate the appearance of scene segments given only accurate radiances from each segment. No additional information, such as the radiance of the illumination, is required, as it is in CIECAM models. The ground truth of HCC is the psychophysical measurement of the appearance of each image segment.

• Computer Vision Color Constancy (CVCC) studies techniques for estimating the surface reflectance of objects under variable illumination. Its goal is to separate the reflectance and illumination components of the input array of scene radiances. If successful, these algorithms use information from the entire scene to find an object's surface reflectance. The ground truth of CVCC is the physical measurement of the surface reflectance of each image segment.

Experiments measuring the human appearance of constant surface reflectances show considerable variation depending on scene content. Examples include simultaneous contrast, color assimilation, and 3-D Mondrians (Albers, 1962; Parraman et al., 2009, 2010). Computer vision's goal is to identify the surface regardless of its appearance to humans. Thus, the two distinct kinds of Color Constancy do not share the same ground truth. They either have different fundamental mechanisms, or very different implementations. If they use the same underlying mechanism, then that mechanism would have to compute very different results: a single reflectance surface is seen in HCC to vary considerably with scene content, while the challenge for CVCC is to estimate the same constant reflectance in all scene contents.

Image capture

A problem common to both HCC and CVCC is the need for accurate scene-radiance data as input for the models. The early spot-meter technique for measuring simple targets was replaced by digital scans of high-dynamic-range film images (McCann, 1988), and more recently by multiple exposures using electronic imaging.
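The multiple-exposure idea can be sketched in a few lines. The function below is a hypothetical illustration, not the calibration procedure of any of the papers cited here: it assumes linear (RAW-like) pixel values in [0, 1], divides each by its exposure time, and averages the results with a hat weighting that distrusts values near the sensor's clipping limits.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Estimate relative scene radiance from multiple linear exposures.

    Each pixel's radiance is a weighted average of (pixel / time),
    where the hat weight w = 1 - |2*pixel - 1| trusts mid-range values
    and gives zero weight to fully clipped ones.  Illustrative only;
    real cameras need per-device calibration.
    """
    images = [np.asarray(im, dtype=float) for im in images]
    radiance = np.zeros_like(images[0])
    weight_sum = np.zeros_like(images[0])
    for im, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * im - 1.0)  # hat weighting
        radiance += w * (im / t)
        weight_sum += w
    # Guard against pixels clipped in every exposure.
    return radiance / np.maximum(weight_sum, 1e-6)
```

A short exposure recovers the highlights that a long exposure clips, and the long exposure recovers the shadows that the short one quantizes away; the weighted average stitches the two together into one relative-radiance map.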
Papers by Debevec and Malik (1997), Mitsunaga and Nayar (1999), Robertson et al. (2003), and Grossberg and Nayar (2004) propose calibration methods for standard digital images. Funt and Shi (2010) describe the advantages of using DCRAW software to extract RAW camera data that is linear and closer to the camera sensor's response. Xiong et al. (2012) and Kim et al. (2012) describe techniques for converting standard images to RAW for further processing. The common thread is that these papers attempt to remove the camera's response functions from its digital data in order to measure accurate scene radiances.

Surface reflectance by first finding illumination

Helmholtz (1924) introduced the idea that constancy could be explained by finding the illumination first. If that were accomplished by some means, the quanta catch of the receptors divided by the quanta catch from the illumination would equal a measure of surface reflectance. For Human Color Constancy (HCC), that approach could provide an alternative partial explanation of McCann et al. (1976), but not of subsequent vision measurements (McCann, 2012, chapter 27.5). For CVCC, that approach works within strict bounds imposed on the illumination. Obviously, it can work perfectly in illumination that is both spatially and spectrally uniform; under these conditions there is a single description of the illumination falling on all objects in the scene. Real scenes, however, do not have uniform illumination. One CVCC approach assumes that the illumination is spectrally uniform, namely that a single illuminant spectrum falls on all objects but varies in intensity. Under such spectrally uniform illuminants we can use chromaticity, a measure of spectral composition, to describe any intensity of that spectrum. However, if the scene contains more than one spectral illuminant, such as sunlight and skylight, or colored reflections from a colored surface,
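Why chromaticity works under a single-spectrum illuminant can be shown with a minimal sketch (a hypothetical helper, not code from any cited paper): the coordinates r = R/(R+G+B) and g = G/(R+G+B) are unchanged when all three channels are scaled by the same factor, which is exactly what a change in illumination intensity does under one illuminant spectrum.

```python
def chromaticity(rgb):
    """Project a linear (R, G, B) triplet onto intensity-invariant
    chromaticity coordinates (r, g).  Multiplying all channels by the
    same factor -- an intensity change under a single illuminant
    spectrum -- leaves (r, g) unchanged."""
    R, G, B = rgb
    s = R + G + B
    if s == 0:
        return (0.0, 0.0)  # conventional value for a black pixel
    return (R / s, G / s)
```

For example, doubling every channel (twice the illumination) returns the same (r, g) pair, whereas a second illuminant with a different spectrum scales the channels unequally and shifts the chromaticity, which is exactly the failure mode described above.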
References

[1] Alessandro Rizzi et al., "Glare-limited appearances in HDR images," 2009.
[2] J. Albers et al., "Interaction of Color," 1971.
[3] Shree K. Nayar et al., "Modeling the space of camera response functions," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004.
[4] Alessandro Rizzi et al., "Artist's colour rendering of HDR scenes in 3D Mondrian colour-constancy experiments," Electronic Imaging, 2010.
[5] Stephen Lin et al., "A New In-Camera Imaging Model for Color Computer Vision and Its Application," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012.
[6] Marc Ebner et al., "Color Constancy," Computer Vision, A Reference Guide, 2007.
[7] Erik Reinhard et al., "High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting (The Morgan Kaufmann Series in Computer Graphics)," 2005.
[8] Brian V. Funt et al., "Is Machine Colour Constancy Good Enough?," ECCV, 1998.
[9] Trevor Darrell et al., "From pixels to physics: Probabilistic color de-rendering," IEEE Conference on Computer Vision and Pattern Recognition, 2012.
[10] Shree K. Nayar et al., "Radiometric self calibration," Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1999.
[11] Thomas Sutton et al., "The Dictionary of Photography," Nature, 1902.
[12] E. Land et al., "Lightness and retinex theory," Journal of the Optical Society of America, 1971.
[13] J. W. Gibbs et al., "Scientific Papers," Molecular Imaging and Biology, 2002.
[14] Alessandro Rizzi et al., "Color appearance and color rendering of HDR scenes: an experiment," Electronic Imaging, 2009.
[15] John J. McCann et al., "Accurate Information vs. Looks Good: Scientific vs. Preferred Rendering," CGIV, 2012.
[16] Andrew C. Gallagher, "The Art and Science of HDR Imaging," J. Electronic Imaging, 2012.
[17] S. McKee et al., "Quantitative studies in retinex theory: a comparison between theoretical predictions and observer responses to the “color mondrian” experiments," Vision Research, 1976.
[18] Klaus-Dieter Kuhnert et al., "Auto White Balance Using the Coincidence of Chromaticity Histograms," Eighth International Conference on Signal Image Technology and Internet Based Systems, 2012.
[19] Graham D. Finlayson et al., "Color by Correlation," CIC, 1997.
[20] D. Pascale, "RGB coordinates of the Macbeth ColorChecker," 2006.
[21] J. Maxwell, "The Scientific Papers of James Clerk Maxwell," 2009.
[22] Jitendra Malik et al., "Recovering high dynamic range radiance maps from photographs," SIGGRAPH '08, 1997.
[23] Joost van de Weijer et al., "Color in Computer Vision," 2008.
[24] Alessandro Rizzi et al., "Glare-limited appearances in HDR images," CIC, 2007.
[25] T. Martin McGinnity et al., "Chromaticity Space for Illuminant Invariant Recognition," IEEE Transactions on Image Processing, 2012.
[26] Robert L. Stevenson et al., "Estimation-theoretic approach to dynamic range enhancement using multiple exposures," J. Electronic Imaging, 2003.
[27] Alessandro Rizzi et al., "Camera and visual veiling glare in HDR images," 2007.
[28] Gabriela Koreisová et al., "Scientific Papers," Nature, 1997.
[29] Ruigang Yang et al., "A Uniform Framework for Estimating Illumination Chromaticity, Correspondence, and Specular Reflection," IEEE Transactions on Image Processing, 2011.
[30] Li Yao, "Estimation Illumination Chromaticity," Second International Symposium on Intelligent Information Technology Application, 2008.
[31] Lilong Shi et al., "The Rehabilitation of MaxRGB," CIC, 2010.
[32] Erik Reinhard et al., "High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting," 2010.