Temporal Selective Max Pooling Towards Practical Face Recognition

In this report, we address two challenges in building a real-world face recognition system: pose variation in uncontrolled environments and the computational expense of processing a video stream. First, we argue that the frame-wise feature mean fails to characterize the variation among frames. We propose instead to preserve the overall pose diversity so that the video feature represents the subject identity: since pose varies even within a single video, identity then becomes the only source of variation across videos. Following this variation-untangling idea, we present a pose-robust face verification algorithm that represents each video as a bag of frame-wise CNN features. Second, instead of simply using all the frames, the algorithm selects key frames by pose quantization: frames are assigned by pose distance to K-means centroids, which reduces the number of feature vectors from hundreds to K while still preserving the overall diversity. Recognition is implemented as a rank list of one-to-one similarities (i.e., verification) computed on the proposed video representation. On the official 5,000 video pairs of the YouTube Faces dataset, our algorithm achieves performance comparable to the state of the art, which averages deep features over all frames. Notably, the proposed generic algorithm is validated on a public dataset and yet applicable in real-world systems.
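The pipeline above can be sketched as follows, assuming per-frame pose descriptors and CNN features have already been extracted. This is an illustrative NumPy sketch, not the authors' implementation: the plain K-means loop, the function names, and the choice of cosine similarity for verification are all assumptions made for the example.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain K-means (illustrative): returns k centroids of X, shape (k, P)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # distance of every point to every centroid, shape (N, k)
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):  # skip empty clusters
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids

def select_key_frames(pose, feats, k):
    """Pose quantization: cluster pose descriptors, then keep the frame
    nearest to each centroid, reducing N frames to at most k key frames."""
    centroids = kmeans(pose, k)
    d = np.linalg.norm(pose[:, None, :] - centroids[None, :, :], axis=2)
    nearest = d.argmin(axis=0)            # one frame index per centroid
    return feats[np.unique(nearest)]      # (<=k, D) key-frame features

def video_feature(selected_feats):
    """Temporal max pooling over the selected key-frame features."""
    return selected_feats.max(axis=0)

def verify(feat_a, feat_b):
    """One-to-one similarity between two video features (cosine, assumed)."""
    return float(feat_a @ feat_b /
                 (np.linalg.norm(feat_a) * np.linalg.norm(feat_b) + 1e-12))
```

A verification decision would then threshold `verify(video_feature(a), video_feature(b))`, with the threshold tuned on held-out pairs; pooling over at most K key frames rather than hundreds of frames is what keeps the per-video cost low.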
