The Perception of Illumination

This issue is the second of two devoted to the perception of illumination. The first issue focused on cast shadows. This second issue provides a further sampling of the rich array of questions posed by the nature of illumination, the perception of its direction, and its effects on perceived geometry and material.

At first blush it may not seem to make sense to talk about the 'perception of illumination'. After all, in the context of scene perception, illumination is a description of the light incident on the scene. Illuminance is the luminous flux per unit area incident at a point on an object's surface, but the eye registers the flux incident on the retina, not the flux incident on the object. Yet observers can easily make judgments, albeit not always precise ones, about the overall level of illumination, its colour cast, and its direction. One could also argue that the perception of illumination is no less indirect than the perception of other scene properties, such as object shape or reflectance. Further, at some level, knowledge of the intensity, colour, and direction of illumination is a prerequisite for determining shape and material properties.

However, the idea of 'perception of illumination' raises some tough questions. Is the visual system's knowledge of illumination implicit and specific to the various tasks of vision? For example, the information about illumination direction implied by depth from shadows may differ from that implied by shape from shading, or from direct estimates of illumination direction. Or is there an 'illumination estimation module' in which, say, the intensity, colour, or direction of illumination in a scene is estimated, made explicit, and then applied to a range of estimation tasks? The prevailing view probably favours the former. Although it is difficult to formulate well-defined experimental questions that can direct and settle the broad issue, one can ask how illumination information in an image is used or discounted in a variety of tasks.

For example, what is the precision of the visual system's built-in knowledge of illumination direction? Koenderink, van Doorn, and Pont show conditions under which observers can estimate the elevation and azimuth of the light direction with remarkable precision. McManus, Buckman, and Woolley provide evidence challenging the existence of a robust built-in prior assumption that illumination comes from above and to the left. Jenkin (Heather and Michael), Dyde, and Harris ask a fundamental question: if knowledge of light direction is somehow built into the process of extracting shape from shading, what is the frame of reference in which that direction is measured? They propose that the frame of reference is a mixture of body-based, visual, and gravitational frames.

Can shading and reflectance be separated without an explicit estimate of the illumination's colour parameters? Olmos and Kingdom propose a solution that exploits the constraint, observed in natural images, that shading and shadows tend to produce pure luminance variation, whereas changes in surface reflectance produce aligned chromatic and luminance variation.

Also relevant to the constraints of natural vision, the fine-grained yet visible structure of many statistically uniform materials in the world, such as grass, bark, or asphalt, produces richly textured colour images. It is important to tie together decades of research on colour discrimination of uniform patches with the everyday task of discriminating materials.
Te Pas and Koenderink take us in that direction by comparing human discrimination of uniform colours with that of coloured textures.