Face Recognition Based on Multiple Representations: Splitting the Error Space

In recent years, numerous algorithms have been proposed for face recognition [1], and considerable progress has been made in this direction. Most of these algorithms achieve high recognition rates only under small variations in illumination, scale, facial expression and perspective angle or pose [2, 3]. Their poor performance under extreme variations of these factors is not surprising: recent investigations have shown that the proposed face representation schemes exhibit greater variability for a given face under changes in scale, illumination, perspective angle and expression than across different faces when these factors are held constant. In other words, the intra-class variance is larger than the inter-class variance [2].
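
The intra-class versus inter-class variance comparison can be made concrete with the standard within-class/between-class scatter decomposition. The sketch below is illustrative only and not part of the cited works; the feature array, labels, and function name are hypothetical, and it assumes face images have already been mapped to fixed-length feature vectors by some representation scheme.

```python
import numpy as np

def scatter_decomposition(features, labels):
    """Split the variance of face feature vectors into a within-class part
    (same person under varying pose, illumination, expression, scale) and a
    between-class part (different persons)."""
    overall_mean = features.mean(axis=0)
    within = 0.0
    between = 0.0
    for person in np.unique(labels):
        cls = features[labels == person]
        cls_mean = cls.mean(axis=0)
        # Spread of one identity's samples around its own mean.
        within += np.sum((cls - cls_mean) ** 2)
        # Spread of the identity means around the global mean,
        # weighted by the number of samples per identity.
        between += len(cls) * np.sum((cls_mean - overall_mean) ** 2)
    return within, between

# Hypothetical usage: 200 random 64-dimensional "feature vectors" for 10 identities.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = rng.integers(0, 10, size=200)
w, b = scatter_decomposition(X, y)
print(f"within-class scatter: {w:.1f}, between-class scatter: {b:.1f}")
```

A representation for which the within-class term dominates the between-class term, as reported for the schemes discussed above, will tend to confuse images of different people more readily than it separates them.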

[1] D. Vernon, Machine Vision, 1991.

[2] S. D. Kollias et al., "Face extraction from non-uniform background and recognition in compressed domain," Proc. IEEE ICASSP '98, 1998.

[3] A. Pentland et al., "Probabilistic visual learning for object representation," IEEE Trans. Pattern Anal. Mach. Intell., 1997.

[4] T. Kohonen et al., Self-Organization and Associative Memory, 1988.

[5] A. C. Tsoi et al., "Face recognition: a convolutional neural-network approach," IEEE Trans. Neural Networks, 1997.

[6] R. Chellappa et al., "Human and machine recognition of faces: a survey," Proc. IEEE, 1995.

[7] Z.-Q. Hong, "Algebraic feature extraction of image for recognition," Pattern Recognit., 1991.

[8] J. Daugman et al., "Face and gesture recognition: overview," IEEE Trans. Pattern Anal. Mach. Intell., 1997.