Virtual Reality and Affective Computing Techniques for Face-to-face Communication

We present a multi-modal affective virtual environment (VE) for job interview training. The platform is designed to support real-time, emotion-driven simulations between an embodied conversational agent (ECA) and a human user. Its primary goal is to train candidates (students, job seekers, etc.) to better control their emotional states and behavioral skills. Users' emotional and behavioral states will be assessed through several human-machine interfaces and biofeedback sensors, and the collected data will be processed in real time by a behavioral engine. A preliminary experiment was carried out to analyze the correspondence between users' perceived emotional states and the collected data. Participants viewed a series of sixty pictures from the International Affective Picture System (IAPS) and rated each picture on six emotion dimensions: joy, anger, surprise, disgust, fear, and sadness.
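
As an illustration of how such per-picture emotion ratings might be aggregated, the following Python sketch averages participants' scores for each picture and reports the dominant perceived emotion. The `Rating` structure, the rating scale, and all scores shown are hypothetical assumptions for illustration, not details taken from the experiment.

```python
from dataclasses import dataclass
from statistics import mean

# The six discrete emotion dimensions rated in the experiment.
EMOTIONS = ("joy", "anger", "surprise", "disgust", "fear", "sadness")

@dataclass
class Rating:
    """One participant's scores for one IAPS picture (hypothetical format)."""
    picture_id: int
    scores: dict  # emotion name -> score (scale assumed, e.g. 1-9)

def dominant_emotion(ratings: list[Rating]) -> dict[int, str]:
    """Average scores per picture across participants and return, for each
    picture, the emotion dimension with the highest mean score."""
    by_picture: dict[int, list[Rating]] = {}
    for r in ratings:
        by_picture.setdefault(r.picture_id, []).append(r)
    result = {}
    for pid, rs in by_picture.items():
        means = {e: mean(r.scores[e] for r in rs) for e in EMOTIONS}
        result[pid] = max(means, key=means.get)
    return result

# Illustrative data: two participants rating picture 7.
ratings = [
    Rating(7, {"joy": 1, "anger": 2, "surprise": 3,
               "disgust": 7, "fear": 5, "sadness": 4}),
    Rating(7, {"joy": 2, "anger": 3, "surprise": 2,
               "disgust": 8, "fear": 6, "sadness": 5}),
]
print(dominant_emotion(ratings))  # {7: 'disgust'}
```

In practice, such aggregated self-report labels could serve as ground truth against which the signals from the biofeedback sensors are compared.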
