Towards an Architecture Model for Emotion Recognition in Interactive Systems: Application to a Ballet Dance Show

In the context of the very dynamic and challenging domain of affective computing, we adopt a software engineering point of view on emotion recognition in interactive systems. Our goal is threefold: first, to develop an architecture model for emotion recognition that emphasizes multimodality and reusability; second, to build a prototype based on this architecture model, focusing on gesture-based emotion recognition; and third, to use this prototype to augment a ballet dance show.
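As a rough illustration of the kind of architecture the abstract describes, the sketch below shows a component-based pipeline in which modality-specific recognizers can be composed and their outputs fused. All class names, the toy movement features (speed, expansion), and the heuristic classifier are hypothetical stand-ins invented for this example; they are not taken from the paper's actual model.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List

class RecognitionComponent(ABC):
    """A reusable pipeline stage: capture, feature extraction, or classification."""
    @abstractmethod
    def process(self, data: Any) -> Any: ...

class GestureCapture(RecognitionComponent):
    """Stand-in for a motion-capture driver; emits raw movement features."""
    def process(self, data: Any) -> Dict[str, float]:
        return {"speed": data.get("speed", 0.0), "expansion": data.get("expansion", 0.0)}

class GestureEmotionClassifier(RecognitionComponent):
    """Toy heuristic: fast, expansive movement reads as joy; slow, contracted as sadness."""
    def process(self, features: Dict[str, float]) -> Dict[str, float]:
        joy = min(1.0, 0.5 * features["speed"] + 0.5 * features["expansion"])
        return {"joy": joy, "sadness": 1.0 - joy}

class FusionComponent:
    """Averages per-modality emotion scores, so new modalities plug in without changes."""
    def fuse(self, outputs: List[Dict[str, float]]) -> Dict[str, float]:
        fused: Dict[str, float] = {}
        for scores in outputs:
            for emotion, value in scores.items():
                fused[emotion] = fused.get(emotion, 0.0) + value / len(outputs)
        return fused

# A single-modality (gesture) pipeline; a facial or vocal pipeline built from the
# same RecognitionComponent interface could be fused alongside it.
pipeline: List[RecognitionComponent] = [GestureCapture(), GestureEmotionClassifier()]
sample: Any = {"speed": 0.8, "expansion": 0.9}
for stage in pipeline:
    sample = stage.process(sample)
print(FusionComponent().fuse([sample]))  # e.g. {'joy': 0.85, 'sadness': 0.15}
```

The point of the shared `RecognitionComponent` interface is reusability: each stage can be swapped or recombined, and multimodality is handled by running several such pipelines in parallel and fusing their emotion scores.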
