Presents an algorithm for detecting faces in images using shared-weight, replicated neural networks. A first neural network forms rough hypotheses about the positions of faces; these hypotheses are then verified by a second neural network. The algorithm applies to images in which the size of the faces is not known a priori. The computation time needed for complete processing of an image is reasonable: on a standard workstation, a 512×512 image is processed in 50 seconds, including smoothing and normalization of the image. The algorithm can easily be ported to a more specialized machine, since most of the computation consists of convolutions with kernels of size 5×5 or 8×8. In this paper, the authors assume that the faces are well oriented in the image. This assumption could be removed by following an approach similar to the one used for the scale problem: a network is trained to be insensitive to the precise orientation of the face. This kind of segmentation algorithm can be applied to other problems in which the objects to be detected cannot easily be characterized by their outlines or by classical image-processing primitives.
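The following is a minimal, illustrative sketch of the coarse-to-fine flow described above (shared-weight convolutional hypothesis map, local-maxima candidates, second verification net, scanning over several scales). It is not the authors' implementation: the kernels here are random placeholders standing in for trained weights, and the window sizes, thresholds, and scale set are assumptions chosen only to mirror the 5×5 / 8×8 kernel sizes quoted in the abstract.

```python
# Hedged sketch of a two-stage (hypothesis + verification) face detector.
# All weights and thresholds are placeholders; only the data flow is illustrated.
import numpy as np
from scipy.signal import convolve2d
from scipy.ndimage import zoom

rng = np.random.default_rng(0)

# Stand-ins for trained shared-weight kernels (sizes taken from the abstract).
K_COARSE = rng.normal(scale=0.1, size=(5, 5))   # hypothesis net kernel
K_VERIFY = rng.normal(scale=0.1, size=(8, 8))   # verification net kernel

def hypothesis_map(image, scale):
    """Stage 1: convolve a rescaled, zero-mean image with the shared-weight
    kernel and return a map of rough face-position scores."""
    small = zoom(image, 1.0 / scale, order=1)
    small = small - small.mean()                 # crude normalization
    response = convolve2d(small, K_COARSE, mode="same", boundary="symm")
    return np.tanh(response)                     # squashing non-linearity

def candidate_positions(score_map, threshold=0.8):
    """Keep positions whose score exceeds a threshold as face hypotheses."""
    ys, xs = np.where(score_map > threshold)
    return list(zip(ys, xs))

def verify(image, y, x, scale, accept=0.5):
    """Stage 2: a second (hypothetical) net scores the window around a
    hypothesis, mapped back to the original resolution."""
    half = 4 * scale
    window = image[max(0, y * scale - half): y * scale + half,
                   max(0, x * scale - half): x * scale + half]
    if window.size == 0:
        return False
    window = zoom(window, (8 / window.shape[0], 8 / window.shape[1]), order=1)
    return np.tanh(np.sum(window * K_VERIFY)) > accept

def detect_faces(image, scales=(2, 4, 8)):
    """Scan several scales so the face size need not be known a priori."""
    detections = []
    for s in scales:
        for y, x in candidate_positions(hypothesis_map(image, s)):
            if verify(image, y, x, s):
                detections.append((y * s, x * s, s))
    return detections

if __name__ == "__main__":
    img = rng.random((512, 512)) - 0.5           # stand-in for a smoothed, normalized image
    print(detect_faces(img)[:5])                 # (row, col, scale) triples
```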