Enhancement of human computer interaction with facial electromyographic sensors

In this paper we describe a way to enhance human-computer interaction using facial electromyographic (EMG) sensors. Knowing the emotional state of the user enables interaction adapted to the user's mood, so that Human-Computer Interaction (HCI) gains in ergonomics and ecological validity. While video-based expression recognition systems need exaggerated facial expressions to reach high recognition rates, the technique we developed using electrophysiological data enables faster detection of facial expressions, even in the presence of subtle movements. Features were extracted from 8 EMG sensors located around the face. Gaussian models for six basic facial expressions - anger, surprise, disgust, happiness, sadness and neutral - were learnt from these features and provide a mean recognition rate of 92%. Finally, a prototype of one possible application of this system was developed, in which the output of the recognizer was sent to the expression module of a 3D avatar that then mimicked the expression.
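The classifier described above can be sketched as follows: fit one multivariate Gaussian per expression class on feature vectors derived from the 8 EMG channels, then label a new sample with the class whose Gaussian gives the highest likelihood. This is a minimal illustration, not the authors' implementation; the function names are hypothetical, and the choice of EMG features (e.g. per-channel amplitude measures) and the covariance regularization constant are assumptions.

```python
import numpy as np

def fit_gaussians(features_by_class):
    """Fit one Gaussian (mean vector, covariance matrix) per expression class.

    features_by_class maps a label (e.g. "anger") to a list of feature
    vectors, one per training sample (e.g. one value per EMG channel).
    """
    models = {}
    for label, samples in features_by_class.items():
        X = np.asarray(samples, dtype=float)
        mu = X.mean(axis=0)
        # Small diagonal term keeps the covariance invertible with few samples.
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        models[label] = (mu, cov)
    return models

def log_likelihood(x, mu, cov):
    """Log density of a multivariate Gaussian evaluated at x."""
    d = len(mu)
    diff = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d * np.log(2 * np.pi) + logdet
                   + diff @ np.linalg.inv(cov) @ diff)

def classify(x, models):
    """Return the expression whose Gaussian assigns x the highest likelihood."""
    x = np.asarray(x, dtype=float)
    return max(models, key=lambda label: log_likelihood(x, *models[label]))
```

In a full system, `classify` would run on features computed over a short sliding window of the EMG signal, so that expressions are detected with low latency.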
