Enhancing nonverbal human-computer interaction with expression recognition

This paper describes an integrated system for human emotion recognition, which provides feedback about the relevance or impact of the information presented to the user. Other techniques in this field extract explicit motion fields from the areas of interest and classify them with the help of templates or training sets; the proposed system, however, compares indications of muscle activation in the human face to data taken from similar actions of a 3-D head model. This comparison takes place at the curve level, with each curve being drawn from detected feature points in an image sequence or from selected vertices of the polygonal model. This process identifies the muscles that contribute to the detected motion; the result can then be used in conjunction with the Mimic Language, a table structure that maps groups of muscles to emotions. The method can be applied to either frontal or rotated views, as the calculated curves are easier to rotate in 3-D space than motion vector fields. The notion of describing motion with specific points is also supported in MPEG-4, and the relevant encoded data can be used in the same context, eliminating the need for machine vision techniques.
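To make the described pipeline concrete, the sketch below illustrates one possible way the curve-level comparison and the Mimic-Language lookup could fit together. All of it is an assumption for illustration: the synthetic model curves, the mean-distance measure, the threshold, and the muscle-to-emotion table entries are placeholders, not values or structures taken from the paper.

```python
import numpy as np

# Toy "model curves": for each muscle, the 2-D trajectory traced by a
# tracked vertex of the 3-D head model when that muscle alone is activated.
# (Synthetic placeholder data, not values from the paper.)
T = np.linspace(0.0, 1.0, 10)
MODEL_CURVES = {
    "zygomaticus_major": np.stack([3.0 * T, 1.5 * T], axis=1),    # mouth corner pulled up/out
    "frontalis":         np.stack([0.2 * T, 2.5 * T], axis=1),    # brow raised
    "corrugator":        np.stack([-1.0 * T, -0.5 * T], axis=1),  # brows drawn together
}

# Mimic-Language-style lookup: a group of active muscles maps to an emotion.
# (Hypothetical entries for illustration only.)
MIMIC_TABLE = {
    frozenset({"zygomaticus_major"}): "joy",
    frozenset({"frontalis"}): "surprise",
    frozenset({"corrugator", "frontalis"}): "sadness",
}

def curve_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Mean Euclidean distance between two equal-length point trajectories."""
    return float(np.mean(np.linalg.norm(a - b, axis=1)))

def active_muscles(observed: np.ndarray, threshold: float = 0.5) -> frozenset:
    """Muscles whose model curve lies close to the observed feature-point curve."""
    return frozenset(
        name for name, curve in MODEL_CURVES.items()
        if curve_distance(observed, curve) < threshold
    )

def classify(observed: np.ndarray) -> str:
    """Map the detected muscle group to an emotion via the mimic table."""
    return MIMIC_TABLE.get(active_muscles(observed), "unknown")

# Usage: a tracked mouth-corner trajectory that closely follows the
# zygomaticus-major model curve is classified as "joy".
observed = np.stack([3.0 * T + 0.05, 1.5 * T - 0.03], axis=1)
print(classify(observed))  # -> "joy"
```

Because the comparison operates on point trajectories rather than dense motion fields, handling a rotated view would amount to rotating the model curves in 3-D before projection, which is the property the paper highlights.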
