Human-robot interaction has become essential for many people in recent years, and facial expression reproduction is expected to be applied in human-robot interaction to convey feelings. In this paper, geometrical features are defined to describe the generalized features of facial expressions, so that they can be used both to recognize human expressions and to replicate them on a robot. Using these geometrical features, a generalized distance between two expressions is proposed to classify expression types once the weight of each geometrical feature has been obtained. An additive actuation method based on the Blendshape model is also used to achieve a target expression with a specified accuracy. Finally, simulation experiments were conducted on the BU-4DFE database to verify these methods, covering the visualization and analysis of the geometrical features, the calculation of the weights and the power, and the classification and actuation of a target expression. The visualization showed that the geometrical features behave intuitively, while the analysis confirmed that they follow consistent geometrical rules. The weights and the power were calculated to confirm the categorization of expressions, and the classification rates and relative errors demonstrated the reliability of the geometrical features and the calculated weights. Finally, the results of the additive actuation of feature points provide a useful reference for choosing the actuation points of a robot face, depending on the expression accuracy to be achieved. The methods and results of this paper can be applied to the development of a practical robot face.
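The two computational steps described above (a weighted generalized distance used to classify expressions, and Blendshape-style additive actuation of feature points) can be sketched as follows. This is a minimal illustration rather than the paper's implementation: the function names, the Minkowski-style form of the distance, the interpretation of the "power" parameter, and the toy data are all assumptions.

    import numpy as np

    def generalized_distance(f_a, f_b, weights, power=2.0):
        """Weighted Minkowski-style distance between two vectors of
        geometrical features (e.g. normalized landmark distances and
        angles). 'power' is assumed to play the role of the power
        mentioned in the abstract; the paper's exact formula may differ."""
        diff = np.abs(np.asarray(f_a) - np.asarray(f_b))
        return float(np.sum(weights * diff ** power) ** (1.0 / power))

    def classify_expression(features, prototypes, weights, power=2.0):
        """Return the label of the nearest expression prototype."""
        return min(prototypes, key=lambda lbl: generalized_distance(
            features, prototypes[lbl], weights, power))

    def additive_actuation(neutral, bases, coeffs):
        """Blendshape-style synthesis: neutral feature-point positions
        (N, 3) plus a weighted sum of K displacement bases (K, N, 3)."""
        return neutral + np.tensordot(coeffs, bases, axes=1)

    # Toy usage with random data (illustrative only).
    rng = np.random.default_rng(0)
    weights = rng.random(5)
    prototypes = {n: rng.random(5) for n in ("happy", "sad", "surprise")}
    sample = prototypes["happy"] + 0.05 * rng.standard_normal(5)
    print(classify_expression(sample, prototypes, weights))  # nearest prototype, typically "happy"

    neutral = rng.random((4, 3))             # 4 feature points at rest
    bases = rng.standard_normal((2, 4, 3))   # 2 actuation units
    print(additive_actuation(neutral, bases, np.array([0.7, 0.3])).shape)  # (4, 3)

In a robot face, each displacement basis would correspond to one actuation point, so the number of bases needed for a given reconstruction error is what informs the actuator-placement design discussed in the abstract.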
[1] James Philbin et al., "FaceNet: A unified embedding for face recognition and clustering," 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[2] P. Ekman et al., What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS), 2005.
[3] Wang Yu et al., "Development of the humanoid head portrait robot system with flexible face and expression," 2004 IEEE International Conference on Robotics and Biomimetics, 2004.
[4] Lijun Yin et al., "A high-resolution 3D dynamic facial expression database," 2008 8th IEEE International Conference on Automatic Face & Gesture Recognition, 2008.
[5] T. Tsuji et al., "Development of the Face Robot SAYA for Rich Facial Expressions," 2006 SICE-ICASE International Joint Conference, 2006.
[6] Hiroshi Ishiguro et al., "A blendshape model for mapping facial motions to an android," 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2007.