Quantification of facial movements by motion capture

Face motion capture is often used for movies (http://www.cgchannel.com/2012/01/interview-avatar-mocapproducer-james-knight/), video games and computer facial animation (Parke and Waters 1996). Few studies, however, concern the modelling of facial movement by numerical simulation of the mechanical behaviour of soft tissues (Barbarino et al. 2009) or the quantification of facial movement (Popat et al. 2009).

From an animation point of view, motion capture by optical cameras with markers, or by stereo-correlation on the real face, makes it possible to reproduce a real character's facial expressions in a virtual world. From a biomechanical point of view, face motion capture involves all the common difficulties of motion capture analysis, in particular the fact that the motion of facial soft tissues is not directly linked to bone or joint motion. Moreover, most facial movements result from thin muscles attached not only to bony structures but also to the skin (Reda and Sumida 2009). In addition, large inter-individual variation in muscle anatomy is observed (Pessa et al. 1998).

Face motion analysis is also clinically relevant for maxillo-facial surgery, as it provides quantitative criteria for the efficient follow-up of patients with facial pathologies, e.g. facial paralysis, removal of a facial tumour, or facial transplantation. The aim of this study was to propose a full methodology for face motion capture with biomechanical and clinical relevance.
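Because facial soft-tissue motion is superimposed on rigid head motion, quantifying a facial movement typically requires first removing the head's rigid-body displacement, estimated from markers assumed fixed with respect to the skull (e.g. on the forehead or nose bridge). The sketch below illustrates this principle with the standard Kabsch (rigid registration) algorithm; the function names, marker sets, and the choice of head-fixed reference markers are illustrative assumptions, not the specific protocol of this study.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping point rows of P onto Q
    (Kabsch algorithm): q_i ~ R @ p_i + t."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

def facial_displacements(ref_rest, ref_frame, skin_rest, skin_frame):
    """Displacement of skin markers after removing rigid head motion.
    ref_*  : (n, 3) head-fixed reference markers (assumed rigid, n >= 3)
    skin_* : (m, 3) facial soft-tissue markers, at rest and in one frame."""
    # Estimate the rigid motion frame -> rest from the reference markers,
    # then express the skin markers back in the rest head pose.
    R, t = kabsch(ref_frame, ref_rest)
    skin_aligned = skin_frame @ R.T + t
    return skin_aligned - skin_rest     # residual soft-tissue displacement
```

A frame-by-frame application of `facial_displacements` yields marker trajectories expressed in a skull-fixed frame, from which amplitude or asymmetry criteria for clinical follow-up could be derived.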