Face Recognition Vendor Test 2002 Performance Metrics

We present the methodology and recognition performance metrics used in the Face Recognition Vendor Test 2002. We refine the notion of a biometric impostor and show that the traditional measures of identification and verification performance are limiting cases of the open-universe watch list task. The watch list problem generalizes the tradeoff between detecting and identifying persons of interest and the false alarm rate. In addition, we use performance scores on disjoint populations to establish a means of computing and displaying distribution-free estimates of the variation of verification performance against false alarm rate. Finally, we formalize gallery normalization, an extension of previous evaluation methodologies: we define a pair of gallery-dependent mappings that can be applied as a post-recognition step to vectors of distance or similarity scores. All the methods are independent of the underlying biometric and are applicable to large populations.
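Since the watch list task and gallery normalization are described abstractly above, a short sketch may help fix ideas. The Python example below is a minimal sketch, not the paper's specification: the function names, the array layout (one row of gallery similarity scores per probe), the threshold convention (larger score means more similar), and the choice of z-normalization as a concrete gallery-dependent mapping are all illustrative assumptions.

```python
import numpy as np

def z_norm(scores, eps=1e-12):
    """One illustrative gallery-dependent mapping: z-normalize each
    probe's vector of similarity scores against the gallery. The paper
    defines such post-recognition mappings abstractly; z-normalization
    is an assumed concrete instance."""
    mean = scores.mean(axis=-1, keepdims=True)
    std = scores.std(axis=-1, keepdims=True)
    return (scores - mean) / (std + eps)

def watchlist_rates(genuine_scores, true_cols, impostor_scores, threshold):
    """Rank-1 open-universe watch list metrics (a sketch).

    genuine_scores  : (P, G) similarity matrix for probes whose true
                      identity is enrolled; probe i's mate is gallery
                      column true_cols[i].
    impostor_scores : (Q, G) similarity matrix for probes with no mate
                      in the gallery.

    Limiting cases: with a gallery of size one, thresholding reduces to
    verification; with threshold = -inf, the detection-and-identification
    rate reduces to closed-set rank-1 identification.
    """
    # A genuine probe is detected and identified if its top-ranked
    # gallery entry is the true mate AND that score clears the threshold.
    top_col = genuine_scores.argmax(axis=1)
    top_score = genuine_scores.max(axis=1)
    det_id_rate = np.mean((top_col == true_cols) & (top_score >= threshold))

    # An impostor probe raises a false alarm if any of its gallery
    # scores clears the threshold.
    false_alarm_rate = np.mean(impostor_scores.max(axis=1) >= threshold)
    return det_id_rate, false_alarm_rate

# Usage on synthetic scores: mated entries are shifted upward so the
# tradeoff between the two rates is visible as the threshold varies.
rng = np.random.default_rng(0)
true_cols = rng.integers(0, 100, size=500)
gen = rng.normal(0.0, 1.0, size=(500, 100))
gen[np.arange(500), true_cols] += 3.0
imp = rng.normal(0.0, 1.0, size=(500, 100))
for t in (1.0, 2.0, 3.0):
    print(t, watchlist_rates(z_norm(gen), true_cols, z_norm(imp), t))
```

Sweeping the threshold in this sketch traces out the detection-and-identification versus false alarm tradeoff that the watch list formulation generalizes.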
