Automatic frontal face annotation and AAM building for arbitrary expressions from a single frontal image only

In recent years, statistically motivated approaches to the registration and tracking of non-rigid objects, such as the Active Appearance Model (AAM), have become very popular. A major drawback of these approaches is that they require manual annotation of all training images, which is tedious and error-prone. In this paper, an MPEG-4-based approach is presented for the automatic annotation of frontal face images with arbitrary facial expressions from a single annotated frontal image. This approach utilises the MPEG-4-based facial animation system to generate virtual images with different expressions and uses the existing AAM framework to automatically annotate unseen images. The approach demonstrates excellent generalisability by automatically annotating face images from two different databases.
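
The described pipeline amounts to three steps: generate virtual expression variants of the single annotated frontal image via MPEG-4 facial animation parameters (FAPs), build an AAM from the resulting synthetic annotations, and fit that model to unseen images to annotate them. Below is a minimal sketch of this flow, assuming simple placeholder stand-ins (warp_with_fap, SimpleAAM) for the real FAP-driven animation engine and AAM fitting; it illustrates the idea only and is not the authors' implementation.

```python
# Hedged sketch of the automatic-annotation pipeline described above.
# The MPEG-4 FAP warping and the AAM build/fit steps are represented by
# simple placeholders; a real system would warp pixels and match appearance.
import numpy as np


def warp_with_fap(image, landmarks, fap):
    """Placeholder for an MPEG-4 facial-animation warp: displaces the
    annotated landmarks by FAP-driven offsets and leaves the image
    untouched here (a real engine would also warp the pixels)."""
    return image.copy(), landmarks + fap


class SimpleAAM:
    """Minimal stand-in for an AAM: a PCA shape model built from the
    synthetic annotations. `fit` only projects an initial shape onto the
    model, whereas a real AAM would iteratively match appearance too."""

    def __init__(self, shapes, n_modes=5):
        X = np.stack([s.ravel() for s in shapes])      # (N, 2 * n_points)
        self.mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - self.mean, full_matrices=False)
        self.basis = Vt[:min(n_modes, Vt.shape[0])]    # leading shape modes

    def fit(self, image, init_shape):
        coeffs = self.basis @ (init_shape.ravel() - self.mean)
        return (self.mean + self.basis.T @ coeffs).reshape(-1, 2)


# 1. One manually annotated frontal image (random data for illustration).
rng = np.random.default_rng(0)
base_image = rng.random((128, 128))
base_landmarks = rng.random((68, 2)) * 128             # 68-point annotation

# 2. Generate virtual expression images/annotations via placeholder FAPs.
faps = [rng.normal(scale=3.0, size=(68, 2)) for _ in range(20)]
virtual = [warp_with_fap(base_image, base_landmarks, f) for f in faps]
virtual_shapes = [lm for _, lm in virtual]

# 3. Build the AAM from the automatically generated annotations.
aam = SimpleAAM([base_landmarks] + virtual_shapes)

# 4. Annotate an unseen image by fitting the model from the mean shape.
unseen_image = rng.random((128, 128))
annotation = aam.fit(unseen_image, aam.mean.reshape(-1, 2))
print("predicted landmarks:", annotation.shape)
```

In the paper's setting, step 2 is driven by an MPEG-4 compliant facial animation engine rather than the toy offsets used here, and step 4 uses a full AAM search rather than a pure shape projection.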
