Simultaneous optimization of class configuration and feature space for object recognition

A new algorithm for object classification based on an extension of Fisher's discriminant analysis is presented. Object recognition algorithms built on the standard Fisher criterion, such as Fisherfaces, train the classifier on sample-class pairs, using the object categories defined by the application system directly as the classes. In contrast, the new algorithm automatically produces, within each predetermined category, the subclasses that are actually used for classification, via unsupervised learning. To achieve this, we combine Fisher's discriminant analysis with the Akaike information criterion, simultaneously optimizing the class configuration, i.e. the sample-subclass correspondences, and the feature extraction function, thereby improving linear separability. Applying this new method to face recognition, we show that it outperforms the traditional Fisher-based method.
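The paper's method optimizes subclass assignments and the discriminant projection jointly; as a rough illustration of the ingredients, the following sketch approximates it with a simpler two-stage procedure: each category is split into Gaussian subclasses whose number is chosen by minimizing the AIC, and Fisher's discriminant analysis is then fit on the resulting subclass labels. The function name and the use of scikit-learn's `GaussianMixture` and `LinearDiscriminantAnalysis` are assumptions for this sketch, not the authors' implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def split_into_subclasses(X, y, max_subclasses=3, seed=0):
    """For each predetermined category, pick the number of Gaussian
    subclasses by minimizing AIC, then relabel samples with globally
    unique subclass ids (a simplified stand-in for the paper's
    simultaneous optimization)."""
    sub_labels = np.empty(len(y), dtype=int)
    next_id = 0
    for c in np.unique(y):
        Xc = X[y == c]
        best_gmm, best_aic = None, np.inf
        for k in range(1, max_subclasses + 1):
            gmm = GaussianMixture(n_components=k, random_state=seed).fit(Xc)
            aic = gmm.aic(Xc)  # AIC = 2*params - 2*log-likelihood
            if aic < best_aic:
                best_gmm, best_aic = gmm, aic
        sub_labels[y == c] = next_id + best_gmm.predict(Xc)
        next_id += best_gmm.n_components
    return sub_labels

# Demo on synthetic data: category 0 is bimodal, category 1 unimodal.
rng = np.random.default_rng(0)
X0 = np.vstack([rng.normal([-5, 0], 0.5, size=(100, 2)),
                rng.normal([5, 0], 0.5, size=(100, 2))])
X1 = rng.normal([0, 8], 0.5, size=(100, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 100)

sub = split_into_subclasses(X, y)
# Fisher's discriminant analysis on the discovered subclasses, not the
# original categories, which is where the gain in separability comes from.
lda = LinearDiscriminantAnalysis().fit(X, sub)
Z = lda.transform(X)
```

At classification time a test sample would be projected by `lda`, assigned to a subclass, and the subclass mapped back to its parent category; the true algorithm iterates the assignment and projection steps jointly rather than running them once in sequence.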

[1]  Kenji Nagao.  Face recognition by distribution specific feature extraction, 2000, Proceedings IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2000.

[2]  Qi Tian, et al.  Discriminant-EM algorithm with application to image retrieval, 2000, Proceedings IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2000.

[3]  Keinosuke Fukunaga.  Introduction to Statistical Pattern Recognition, 1972.

[4]  H. Akaike.  Information Theory and an Extension of the Maximum Likelihood Principle, 1973.

[5]  R. Fisher.  The Use of Multiple Measurements in Taxonomic Problems, 1936.

[6]  David J. Kriegman, et al.  Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection, 1996, ECCV.

[7]  David J. Kriegman, et al.  Illumination cones for recognition under variable lighting: faces, 1998, Proceedings 1998 IEEE Computer Society Conference on Computer Vision and Pattern Recognition.