Face recognition algorithms as models of human face processing

We evaluated the adequacy of computational algorithms as models of human face processing by examining how the algorithms and humans process individual faces. Comparing model- and human-generated measures of the similarity between pairs of faces allowed us to assess the accord between several automatic face recognition algorithms and human perceivers. Multidimensional scaling (MDS) was used to create a spatial representation of the subjects' response patterns, and the model response patterns were then projected into this space. The results revealed a common bimodal structure for the subjects and for most of the models. The bimodal subject structure reflected strategy differences in making similarity decisions; for the models, the bimodal structure was related to combined aspects of the representations and of the distance metrics used in the implementations.
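To make the analysis pipeline concrete, the following is a minimal sketch, under assumed toy data, of how subject response patterns can be scaled with MDS and how model response patterns can be placed into the resulting space. The arrays `human_ratings` and `model_scores` are hypothetical similarity values over the same face pairs, and the least-squares projection used here is one simple option, not necessarily the exact procedure used in the study.

```python
# Sketch: MDS on subject response patterns, then projection of model patterns.
# `human_ratings` (n_subjects x n_pairs) and `model_scores` (n_models x n_pairs)
# are hypothetical placeholders for similarity ratings/scores on the same face pairs.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
human_ratings = rng.random((20, 150))   # 20 subjects, 150 face pairs (toy data)
model_scores = rng.random((4, 150))     # 4 algorithms, same face pairs (toy data)

# 1. Dissimilarity between subjects' response patterns (1 - Pearson correlation).
subj_dist = squareform(pdist(human_ratings, metric="correlation"))

# 2. Spatial representation of the subjects via metric MDS.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
subj_coords = mds.fit_transform(subj_dist)

# 3. Project each model into the subject space: find coordinates whose
#    distances to the subject points best match the model-subject dissimilarities.
def project(pattern):
    d_target = np.array([1 - np.corrcoef(pattern, s)[0, 1] for s in human_ratings])
    def stress(xy):
        d_fit = np.linalg.norm(subj_coords - xy, axis=1)
        return np.sum((d_fit - d_target) ** 2)
    return minimize(stress, x0=subj_coords.mean(axis=0)).x

model_coords = np.array([project(m) for m in model_scores])
print(subj_coords.shape, model_coords.shape)   # (20, 2) (4, 2)
```

With real data, clustering of the subject coordinates into two groups would correspond to the bimodal structure described above, and the positions of the projected models would indicate which cluster each algorithm's similarity judgments most resemble.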