A low-cost framework for real-time marker-based 3-D human expression modeling

This work presents a robust, low-cost framework for real-time marker-based 3-D human expression modeling using off-the-shelf stereo web cameras and inexpensive adhesive markers applied to the face. The system has low computational requirements, runs on standard hardware, and is portable, with minimal set-up time and no training. It does not require a controlled laboratory environment (lighting or set-up) and is robust under varying conditions such as illumination, facial hair, and skin tone. Stereo web cameras perform 3-D marker tracking to recover both rigid head motion and the non-rigid motion of facial expressions. Tracked markers are then mapped onto a 3-D face model driven by a virtual muscle animation system: muscle inverse kinematics updates muscle contraction parameters from the marker motion to reproduce the performer's expression on a virtual character. Because the muscle-based animation is parametric, a face performance can be encoded with very little bandwidth. Additionally, a radial basis function (RBF) mapping approach is used to easily remap motion capture data to any face model, enabling the automated creation of a personalized 3-D face model and animation system from 3-D data. The expressive power of the system and its ability to recognize new expressions were evaluated on a group of test subjects with respect to the six universally recognized facial expressions. Results show that the abstract muscle definition reduces the effect of potential noise in the motion capture data and allows the seamless animation of any anthropomorphic virtual face model with data acquired through human face performance.
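The RBF remapping step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the kernel choice (Gaussian), the bandwidth parameter `eps`, and the function names are all assumptions introduced here. The idea is to fit an RBF interpolant from source marker positions to the corresponding positions on a target face model, then use it to transfer captured marker motion onto that model.

```python
import numpy as np

def fit_rbf(src, dst, eps=1.0):
    """Fit RBF weights mapping source markers (N,3) to target positions (N,3).

    Uses a Gaussian kernel; this kernel choice is an illustrative assumption.
    """
    # Pairwise distances between source markers
    d = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
    phi = np.exp(-(eps * d) ** 2)        # Gaussian kernel matrix (N,N)
    weights = np.linalg.solve(phi, dst)  # exact interpolation at the markers
    return weights

def apply_rbf(weights, src, query, eps=1.0):
    """Map query points (M,3) onto the target model via the fitted RBF."""
    d = np.linalg.norm(query[:, None, :] - src[None, :, :], axis=-1)
    phi = np.exp(-(eps * d) ** 2)        # kernel evaluations (M,N)
    return phi @ weights
```

Because the weights are solved exactly, the mapping reproduces the target positions at the marker sites and interpolates smoothly in between, which is what allows the same capture data to drive differently proportioned face models.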
