Active Labeling of Facial Feature Points

Although considerable progress has been made in facial feature point detection and tracking, accurate feature point tracking remains very challenging, and manual feature point labeling and correction are time consuming and labor intensive. To alleviate this problem, an active feature point labeling method is proposed in this paper. First, the spatial relations among feature points are modeled by a Bayesian Network. Second, the mutual information between each feature point and the remaining feature points is computed in two steps: in the first step, the mutual information between one facial sub-region and the other sub-regions is calculated to identify the most informative facial region; in the second step, the mutual information between each feature point and the other feature points in that most informative sub-region is computed to rank the facial feature points. Users then label the feature points in descending order of mutual information. After that, the human corrections and the image measurements are integrated by the Bayesian Network to produce refined annotations. Simulated experiments on the extended Cohn-Kanade (CK+) database demonstrate the effectiveness of our approach.
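To illustrate the ranking step, the mutual information between one feature point's coordinates and those of the remaining points can be estimated and sorted in descending order. The sketch below is a simplified stand-in, not the paper's exact Bayesian Network computation: it assumes the point coordinates are approximately jointly Gaussian (so mutual information reduces to a ratio of covariance determinants), and the function names and data layout are our own.

```python
import numpy as np

def gaussian_mutual_information(X, Y):
    """Estimate I(X; Y) assuming X and Y are jointly Gaussian.

    For Gaussian variables, I(X;Y) = 0.5 * log(det(Sx) * det(Sy) / det(S)),
    where Sx, Sy are the marginal covariances and S is the joint covariance.
    """
    joint = np.hstack([X, Y])
    Sx = np.cov(X, rowvar=False)
    Sy = np.cov(Y, rowvar=False)
    S = np.cov(joint, rowvar=False)
    return 0.5 * np.log(np.linalg.det(Sx) * np.linalg.det(Sy)
                        / np.linalg.det(S))

def rank_points_by_mi(points):
    """Rank feature points by MI with the remaining points, descending.

    points: array of shape (n_samples, n_points, 2), i.e. (x, y)
    coordinates of each feature point across a set of training images.
    """
    n_samples, n_points, _ = points.shape
    scores = []
    for i in range(n_points):
        X = points[:, i, :]                        # candidate point
        rest = np.delete(points, i, axis=1)        # all other points
        Y = rest.reshape(n_samples, -1)
        scores.append(gaussian_mutual_information(X, Y))
    return np.argsort(scores)[::-1]                # most informative first
```

With this ordering, the user would be asked to label (or correct) the highest-MI points first, since their positions constrain the most of the remaining configuration. Note that the Gaussian MI estimate needs more training samples than the total coordinate dimension, or the joint covariance becomes singular.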

[1] Qiang Ji et al. Efficient Structure Learning of Bayesian Networks using Constraints. J. Mach. Learn. Res., 2011.

[2] Barry-John Theobald et al. Robust facial feature tracking using selected multi-resolution linear predictors. IEEE 12th International Conference on Computer Vision, 2009.

[3] Maja Pantic et al. Fully Automatic Recognition of the Temporal Phases of Facial Actions. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 2012.

[4] Katherine B. Martin et al. Facial Action Coding System. 2015.

[5] Takeo Kanade et al. Recognizing Action Units for Facial Expression Analysis. IEEE Trans. Pattern Anal. Mach. Intell., 2001.

[6] Stan Z. Li et al. Direct appearance models. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2001.

[7] Albert Ali Salah et al. A Statistical Method for 2-D Facial Landmarking. IEEE Transactions on Image Processing, 2012.

[8] Qiang Ji et al. Active Image Labeling and Its Application to Facial Action Labeling. ECCV, 2008.

[9] Timothy F. Cootes et al. Active Shape Models: Their Training and Application. Comput. Vis. Image Underst., 1995.

[10] Takeo Kanade et al. The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression. IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2010.

[11] Maja Pantic et al. Particle filtering with factorized likelihoods for tracking facial features. Proceedings of the Sixth IEEE International Conference on Automatic Face and Gesture Recognition, 2004.

[12] Qiang Ji et al. A hierarchical framework for simultaneous facial activity tracking. Face and Gesture, 2011.

[13] Zhihong Zeng et al. A Survey of Affect Recognition Methods: Audio, Visual, and Spontaneous Expressions. IEEE Trans. Pattern Anal. Mach. Intell., 2009.

[14] Timothy F. Cootes et al. Active Appearance Models. IEEE Trans. Pattern Anal. Mach. Intell., 2001.

[15] Yang Wang et al. Robust facial feature tracking under varying face pose and facial expression. Pattern Recognit., 2007.

[16] Joachim M. Buhmann et al. Distortion Invariant Object Recognition in the Dynamic Link Architecture. IEEE Trans. Computers, 1993.

[17] Qiang Ji et al. Facial Action Unit Recognition by Exploiting Their Dynamic and Semantic Relationships. IEEE Trans. Pattern Anal. Mach. Intell., 2007.

[18] Jeff G. Schneider et al. Automatic construction of active appearance models as an image coding problem. IEEE Trans. Pattern Anal. Mach. Intell., 2004.