Extended Corrected-Moments Illumination Estimation

Abstract

A remarkably simple color constancy method was recently developed, based in essence on the Gray-Edge method, i.e., on the assumption that the mean of color gradients in a scene (or of the colors themselves, in a Gray-World setting) is close to achromatic. However, this new method for illuminant estimation explicitly includes two important notions: (1) we cannot hope to recover illuminant strength, but only chromaticity; and (2) a polynomial regression from image moment vectors to chromaticity triples should be based not on polynomials but instead on the roots of polynomials, in order to release the regression from absolute units of lighting. In this paper we extend these new image moments in several ways: by replacing the standard expectation-value mean used in the moments by a Minkowski p-norm; by moving to a floating-point value for the parameter p and carrying out a nonlinear optimization on this parameter; and by considering a different expectation value, generated by using the geometric mean. We show that these strategies can drive down the median and maximum error of illumination estimates.

Introduction

Colors in images result from the combination of illumination, surface reflection, and camera sensors, plus the effects of the imaging and display pipeline [13]. In general, the human visual system is capable of filtering out the effects of the illumination source when observing a scene – a psychophysical phenomenon denoted color constancy (CC). In many computer vision or image processing problems, researchers have often made use of some variety of CC as a pre-processing step, either to generate data that is relatively invariant to the illuminant or, on the other hand, to ensure that the captured color of the scene changes appropriately for different illumination conditions. The computational goal in the color constancy task is to estimate the illumination, or at least its chromaticity – color without magnitude.

Remarkably, the recent Corrected-Moments illumination estimation due to Finlayson [6] performs best overall in terms of illumination accuracy, and moreover produces results that reduce the maximum error in estimation. The latter property is important and desirable: a camera manufacturer wishes to generate no images at all that produce strange colors, in any situation. The objective we aim at here falls within the scenario of a camera company (or smartphone producer) providing a CC algorithm with their equipment. In this sense a training phase would be acceptable, since the resulting algorithm adheres only to a single camera – the images we consider are not “unsourced” in the sense that they come from the web or some other unknown source: instead, they come from a known camera.

In this paper we re-examine Finlayson’s Corrected-Moments method [6] with a view to simple extensions which we find further improve the illumination estimates delivered by the method. These simple extensions do not greatly affect the good time- and space-complexity of the method, yet yield better results, thus surpassing the best results to date. Here we extend the Corrected-Moments approach in three ways. Specifically, we begin by incorporating Minkowski-norm moments into Corrected-Moments illumination estimation. Then we show how to incorporate the Zeta-Image [5] approach to illuminant estimation within the Corrected-Moments method. Finally, we devise a float-parameter optimization scheme to deliver the best performance for each dataset situation. A minimal sketch of the two alternative expectation operators involved is given below.
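As an illustration of the two alternative “expectation” operators we substitute into the moments, the following Python sketch (ours, not code from [6]) computes the Minkowski p-norm mean and the geometric mean of a channel’s pixel values. The function names and the log-stabilizer eps are our own illustrative choices; p = 1 recovers the ordinary arithmetic mean used in the standard moments.

    import numpy as np

    def minkowski_mean(x, p):
        # Minkowski p-norm "mean": ((1/N) sum_i x_i^p)^(1/p).
        # p may be any positive float; p = 1 gives the arithmetic mean,
        # while larger p weights bright pixels more heavily.
        x = np.asarray(x, dtype=np.float64)
        return np.mean(x ** p) ** (1.0 / p)

    def geometric_mean(x, eps=1e-12):
        # Geometric mean exp((1/N) sum_i log x_i); eps guards against
        # log(0) at black pixels.
        x = np.asarray(x, dtype=np.float64)
        return np.exp(np.mean(np.log(x + eps)))

Either operator can stand in wherever the standard mean E(·) appears in the moment computations; the float parameter p is then a natural candidate for the nonlinear optimization described later in the paper.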
The paper is organized as follows. In Section [Related Work] we discuss related works that form the scaffold for the present work. In Section [Corrected Moments] we review the Corrected-Moments approach proposed in [6]. In Section [Minkowski Norm and Geometric Mean in Corrected Moments Method] we propose novel moments to be used in the Corrected-Moments approach, plus a new optimization scheme. We then compare results for the proposed moments with previously obtained results, exhaustively considering different estimators applied to four standard datasets.

Related Work

Gray-World and Gray-Edge

In the experiments and tables of results below, note that we compare results with the best state-of-the-art methods to date. However, the method in [6] is in fact based on very simple algorithms, so we begin the discussion with these.

The simplest illumination estimation algorithm is the Gray-World algorithm [3], which assumes that the average reflectance in a scene is achromatic. Thus the illumination color may be estimated by simply taking the global average over pixel values. More specifically, in each color channel k = 1..3, the Gray-World estimate of the light color is given by E(R_k), where E(·) is the expectation value and R_k is the RGB color. That is, Gray-World states that

$E(R_k) = \frac{1}{N}\sum_{i=1}^{N} R_k^i ,$

with N being the number of pixels. Intuitively, Gray-World will obviously fail if a scene is insufficiently colorful. For example, an image of a gold coin that takes up most of the pixels will generate a very poor illumination estimate; if we then move the white point to R = G = B = 1, or use a more careful white-point camera balance (see, e.g., [11]), the image will likely end up containing a coin that looks gray rather than gold.

A more recent, but almost as simple, algorithm is the Gray-Edge method, which asserts that the average of reflectance differences in a scene is achromatic [16]. With this assumption, the illumination color is estimated by computing the average color derivative in the image, $E(\|\nabla R_k\|)$, where $\nabla$ is the gradient field pair $\{\partial/\partial x, \partial/\partial y\}$. The Gray-Edge assumption originated from the empirical observation that the color-derivative probability distribution for images forms a relatively regular, ellipsoid-like shape, with the long axis coinciding with the illumination color [16]. The expectation value for the k-th color channel is then estimated by

$\hat{c}_k = \sqrt{\sum_{i=1}^{N} \left|\frac{\partial R_k^i}{\partial x}\right|^2 + \left|\frac{\partial R_k^i}{\partial y}\right|^2} .$

A short Python sketch implementing both estimators follows.
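To make the two baseline estimators concrete, here is a minimal sketch under our own assumptions: the image is an H x W x 3 floating-point RGB array, np.gradient stands in for the derivative operator, and each estimate is normalized to a unit-length chromaticity vector since, as noted above, only chromaticity is recoverable.

    import numpy as np

    def gray_world(img):
        # Gray-World: per-channel global mean E(R_k), reduced to chromaticity.
        e = img.reshape(-1, 3).mean(axis=0)
        return e / np.linalg.norm(e)

    def gray_edge(img):
        # Gray-Edge: per-channel root-sum-of-squares of gradient magnitudes,
        # c_k = sqrt( sum_i |dR_k/dx|^2 + |dR_k/dy|^2 ),
        # again reduced to chromaticity.
        c = np.empty(3)
        for k in range(3):
            gy, gx = np.gradient(img[:, :, k])
            c[k] = np.sqrt(np.sum(gx ** 2 + gy ** 2))
        return c / np.linalg.norm(c)

Given a linear-RGB test image img, gray_world(img) and gray_edge(img) each return a three-vector estimate of the illuminant chromaticity.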

References

[1] Graham D. Finlayson et al., Shades of Gray and Colour Constancy, 2004, CIC.

[2] Joost van de Weijer et al., Computational Color Constancy: Survey and Experiments, 2011, IEEE Transactions on Image Processing.

[3] G. Buchsbaum, A spatial processor model for object colour perception, 1980.

[4] Ze-Nian Li et al., Fundamentals of Multimedia, 2014, Texts in Computer Science.

[5] A. Hurlbert et al., Perception of three-dimensional shape influences colour perception through mutual illumination, 1999, Nature.

[6] W. E. Snyder et al., Color image processing pipeline, 2005, IEEE Signal Processing Magazine.

[7] Graham D. Finlayson et al., Corrected-Moment Illuminant Estimation, 2013, IEEE International Conference on Computer Vision.

[8] Brian V. Funt et al., A data set for color research, 2002.

[9] Joost van de Weijer et al., Edge-Based Color Constancy, 2007, IEEE Transactions on Image Processing.

[10] Mark S. Drew et al., The Zeta-image, illuminant estimation, and specularity manipulation, 2014, Computer Vision and Image Understanding.

[11] Graham D. Finlayson et al., Root-Polynomial Colour Correction, 2011, Color Imaging Conference.

[12] Mark S. Drew et al., Exemplar-Based Colour Constancy, 2012, BMVC.

[13] Keigo Hirakawa et al., Color Constancy with Spatio-Spectral Statistics, 2012, IEEE Transactions on Pattern Analysis and Machine Intelligence.

[14] Andrew Blake et al., Bayesian color constancy revisited, 2008, IEEE Conference on Computer Vision and Pattern Recognition.

[15] Mark S. Drew et al., Colour Constancy from Both Sides of the Shadow Edge, 2013, IEEE International Conference on Computer Vision Workshops.