Pose, illumination and expression invariant pairwise face-similarity measure via Doppelgänger list comparison

Face recognition approaches have traditionally focused on direct comparisons between aligned images, e.g. using pixel values or local image features. Such comparisons become prohibitively difficult when comparing faces across extreme differences in pose, illumination and expression. The goal of this work is to develop a face-similarity measure that is largely invariant to these differences. We propose a novel data-driven method based on the insight that comparing images of faces is most meaningful when they are in comparable imaging conditions. To this end we describe an image of a face by an ordered list of identities from a Library. The order of the list is determined by the similarity of the Library images to the probe image. The lists act as a signature for each face image: similarity between face images is determined via the similarity of the signatures. Here the CMU Multi-PIE database, which includes images of 337 individuals in more than 2000 pose, illumination and expression combinations, serves as the Library. We show improved performance over state-of-the-art face-similarity measures based on local features, such as FPLBP, especially across large pose variations on FacePix and Multi-PIE. On LFW we show improved performance compared with measures such as SIFT (on fiducials), LBP, FPLBP and Gabor (C1).
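The ranked-list signature described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes simple Euclidean distances between precomputed feature vectors (the paper uses learned local descriptors), ranks Library identities by their best-matching image, and compares two signatures with Spearman's footrule on ranks (the paper evaluates several rank-distance measures). The function names and the choice of rank distance are illustrative assumptions.

```python
import numpy as np

def doppelganger_list(probe, library_feats, library_ids):
    """Signature for a probe image: Library identities ordered by similarity.

    probe         -- feature vector of the probe face (illustrative Euclidean features)
    library_feats -- (n_images, d) array of Library image features
    library_ids   -- identity label for each Library image
    """
    # Distance from the probe to every Library image.
    dists = np.linalg.norm(library_feats - probe, axis=1)
    # Keep the closest image per identity, then order identities by that distance.
    best = {}
    for d, ident in zip(dists, library_ids):
        if ident not in best or d < best[ident]:
            best[ident] = d
    return sorted(best, key=best.get)  # ordered identity list = the signature

def list_similarity(list_a, list_b):
    """Similarity of two Doppelgänger lists via Spearman's footrule on ranks,
    normalized to [0, 1] (1 = identical orderings)."""
    rank_b = {ident: r for r, ident in enumerate(list_b)}
    n = len(list_a)
    footrule = sum(abs(r - rank_b[ident]) for r, ident in enumerate(list_a))
    max_footrule = n * n // 2  # maximum footrule distance between two rankings
    return 1.0 - footrule / max_footrule
```

Two probe images of the same person, even under different pose or lighting, should rank the Library identities similarly and thus receive a high signature similarity, whereas images of different people should not.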
