Still-to-Video face recognition via weighted scenario oriented discriminant analysis

In Still-to-Video (S2V) face recognition, only a few high-resolution still images are enrolled for each subject, while the probes are videos exhibiting complex variations. Since faces present distinct characteristics under different scenarios, recognition directly in the original feature space is ineffective. In this paper, we propose a novel discriminant analysis method that learns separate mappings for the different scenarios (still, video) and further pursues a common discriminant space based on these mappings. Concretely, by modeling each video as a set of local models, we formulate the scenario-oriented mapping learning as an Image-Model discriminant analysis framework. The learning objective incorporates both intra-class compactness and inter-class separability to achieve good discrimination. Moreover, a weighted learning scheme is introduced to concentrate on the discriminative information carried by the most confusing samples, further enhancing performance. Experiments on the COX-S2V dataset demonstrate the effectiveness of the proposed method.

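To make the idea concrete, the following Python sketch illustrates one possible instantiation of the pipeline described above; it is an illustrative assumption, not the paper's exact formulation. It assumes still images and video frames are described by features of the same dimension, builds each video's local models by k-means clustering of its frame features, forms cross-scenario pair scatters with a stacked-vector trick so that a single eigenproblem yields the coupled projections, and uses a simple distance-based weighting to emphasize the most confusing pairs. The function names (video_local_models, weighted_scenario_da) and the specific weighting choices are hypothetical.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans


def video_local_models(frames, n_models=3, seed=0):
    """Cluster a video's frame features; the cluster centers serve as its local models."""
    km = KMeans(n_clusters=min(n_models, len(frames)), n_init=10, random_state=seed)
    return km.fit(frames).cluster_centers_          # (n_models, d)


def weighted_scenario_da(stills, still_labels, videos, video_labels,
                         n_models=3, dim=20, reg=1e-3):
    """Learn coupled projections (W_s, W_v): one for stills, one for video local models.

    stills : (n_s, d) gallery still features, one row per image
    videos : list of (n_frames_i, d) frame-feature arrays, one per probe video
    """
    # Represent every video by a handful of local models, each tagged with its subject label.
    models, model_labels = [], []
    for frames, y in zip(videos, video_labels):
        centers = video_local_models(frames, n_models)
        models.append(centers)
        model_labels += [y] * len(centers)
    models, model_labels = np.vstack(models), np.asarray(model_labels)

    d = stills.shape[1]
    S_w = np.zeros((2 * d, 2 * d))                  # within-class (same-subject) scatter
    S_b = np.zeros_like(S_w)                        # between-class scatter

    # Cross-scenario pairs: stack u = [x; -m] so that W^T u = W_s^T x - W_v^T m.
    for x, yx in zip(stills, still_labels):
        for m, ym in zip(models, model_labels):
            u = np.concatenate([x, -m])[:, None]
            gap = np.linalg.norm(x - m)
            if yx == ym:
                S_w += gap * (u @ u.T)              # emphasize hard (far-apart) positives
            else:
                S_b += (u @ u.T) / (gap + 1e-8)     # emphasize confusing (close) negatives

    # Common discriminant space: maximize projected between-class scatter against
    # within-class scatter via the generalized eigenproblem S_b w = lam (S_w + reg I) w.
    evals, evecs = eigh(S_b, S_w + reg * np.eye(2 * d))
    W = evecs[:, np.argsort(evals)[::-1][:dim]]
    return W[:d], W[d:]                             # W_s, W_v
```

At test time, a probe video would be summarized by its local models, projected through W_v, and matched against gallery stills projected through W_s; the weighting terms above mimic, under the stated assumptions, the intent of focusing the objective on the most confusing sample pairs.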