Recognition of Emotional States in Natural Human-Computer Interaction

In this paper we present a non-invasive method for extracting facial expression components from video sequences. We propose a contextual analysis of the user's state to guide appropriate system actuation. The proposed approach combines the advantages of the MPEG-4 standard and an active contour model to extract the contours of facial objects such as the eyes, eyebrows, nose, lips, and dimples. The first stage applies local statistics to distinguish the objects of interest. The second stage runs an existing active contour model to delineate the contours of these objects. The facial control points can then be located on the contours and used for emotion recognition. To distinguish geometric facial features, an approach is proposed that compares multiple closed polygons. To validate the theoretical concepts, experiments were performed using a neutral and a smiling face. A comparison with an existing approach highlights the advantages and disadvantages of the present work.
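The abstract mentions comparing multiple closed polygons built from facial control points. As a minimal sketch of how such a comparison might work, the snippet below uses scale-normalized centroid-to-vertex distance signatures; this particular metric, the helper names, and the example mouth contours are illustrative assumptions, not the authors' exact method.

```python
import math

def centroid(poly):
    # Arithmetic mean of the vertices (simple centroid proxy).
    n = len(poly)
    return (sum(x for x, _ in poly) / n, sum(y for _, y in poly) / n)

def distance_signature(poly):
    # Distances from the centroid to each vertex, normalized by their
    # mean so the signature is invariant to uniform scaling.
    cx, cy = centroid(poly)
    d = [math.hypot(x - cx, y - cy) for x, y in poly]
    mean = sum(d) / len(d)
    return [v / mean for v in d]

def polygon_dissimilarity(p1, p2):
    # Mean absolute difference of the signatures; the polygons are
    # assumed to have the same number of correspondingly ordered
    # control points (as MPEG-4 feature points would provide).
    s1, s2 = distance_signature(p1), distance_signature(p2)
    return sum(abs(a - b) for a, b in zip(s1, s2)) / len(s1)

# Hypothetical mouth contours: neutral vs. smiling (corners raised).
neutral = [(0, 0.0), (2, -0.5), (4, 0.0), (2, 0.5)]
smiling = [(0, 0.6), (2, -0.7), (4, 0.6), (2, 0.3)]
print(polygon_dissimilarity(neutral, neutral))  # identical shapes -> 0.0
print(polygon_dissimilarity(neutral, smiling))  # larger for differing shapes
```

A signature of this kind is one simple way to make the polygon comparison robust to the scale differences that arise when the face moves toward or away from the camera.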