Research in early (low-level) vision, both for machines and humans, has
traditionally been based on the study of idealized images or image patches such as step edges, gratings, flat fields, and Mondrians. Real images, however, exhibit much richer and more complex structure, whose nature is determined by the physical and geometric properties of illumination, reflection, and imaging. By understanding these physical relationships, a new kind of early vision analysis is made possible. In this paper, we describe a progression of models of imaging physics that present a much more complex and realistic set of image relationships than are commonly assumed in early vision research. We begin with the Dichromatic Reflection Model, which describes how highlights and color are related in images of dielectrics such as plastic and painted surfaces. This gives rise to a mathematical relationship in color space to separate highlights from object color. Perceptions of shape, surface roughness/texture, and illumination color are readily derived from this analysis. We next show how this can be extended to images of several objects, by deriving local color variation relationships from the basic model. The resulting method for color image analysis has been successfully applied in machine vision experiments in our laboratory. Yet another extension is to account for inter-reflection among multiple objects.
We have derived a simple model of color inter-reflection that accounts for the basic phenomena, and report on this model and how we are applying it. In general, the concept of illumination for vision should account for the entire "illumination environment", rather than being restricted to a single light source. This work shows that the basic physical relationships give rise to very structured image properties, which can be a more valid basis for early vision than the traditional idealized image patterns.
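The color-space relationship at the heart of the Dichromatic Reflection Model can be sketched concretely: each pixel of a dielectric surface is modeled as a non-negative linear combination of a body (object-color) vector and a surface (highlight) vector, so highlights can be separated by solving for the two coefficients. The following is a minimal sketch of that decomposition, assuming illustrative color vectors and synthetic pixels rather than measured data, and plain least squares in place of the paper's full analysis:

```python
import numpy as np

# Dichromatic Reflection Model: a dielectric pixel's RGB value is a
# non-negative combination of a body-reflection vector (object color)
# and a surface-reflection vector (illumination/highlight color).
# These basis vectors are illustrative assumptions, not measured data.
c_body = np.array([0.8, 0.2, 0.1])   # reddish object color (assumed)
c_surf = np.array([1.0, 1.0, 1.0])   # white illumination color (assumed)

def separate(pixels, c_body, c_surf):
    """Recover per-pixel body and surface magnitudes (m_b, m_s)
    by least squares, clamped to the physically valid range m >= 0."""
    A = np.stack([c_body, c_surf], axis=1)          # 3x2 basis matrix
    coeffs, *_ = np.linalg.lstsq(A, pixels.T, rcond=None)
    return np.clip(coeffs.T, 0.0, None)             # shape (N, 2)

# Synthetic pixels: a matte point and a point inside a highlight.
pixels = np.array([
    0.9 * c_body,                   # pure body reflection
    0.5 * c_body + 0.7 * c_surf,    # body + specular term (highlight)
])
m = separate(pixels, c_body, c_surf)
```

Because the two pixels lie exactly in the plane spanned by the basis vectors, the recovered magnitudes are [0.9, 0.0] and [0.5, 0.7]; removing the surface term (`m_s * c_surf`) from each pixel yields the highlight-free object color.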