A Multi-modal Virtual Environment to Train for Job Interviews

This paper presents a multi-modal interactive virtual environment (VE) for job-interview training. The proposed platform aims to help candidates (students, job seekers, etc.) better master their emotional state and behavioral skills. Candidates will interact with a virtual recruiter represented by an Embodied Conversational Agent (ECA). Their emotional and behavioral states will be assessed through human-machine interfaces and biofeedback sensors, and the ECA will ask contextual questions to gauge their technical skills. The collected data will be processed in real time by a behavioral engine, enabling a realistic multi-modal dialogue between the ECA and the candidate. This work marks a socio-technological shift that opens the way to new possibilities in areas such as professional and medical applications.
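To make the closed loop between sensing and the ECA's behavior more concrete, the sketch below shows one way such a behavioral engine could be structured. It is only an illustrative outline, not the paper's implementation: the sensor interface (read_biofeedback), the arousal/valence mapping, and the behavior-selection rule are all hypothetical placeholders standing in for the platform's actual components.

```python
# Minimal sketch of a real-time behavioral-engine loop.
# read_biofeedback, estimate_state and next_eca_behavior are illustrative
# stand-ins, not the interfaces of the platform described in the paper.
import random
import time
from dataclasses import dataclass


@dataclass
class EmotionalState:
    arousal: float  # 0 = calm, 1 = highly aroused
    valence: float  # 0 = negative, 1 = positive


def read_biofeedback():
    """Stand-in for biofeedback sensors (e.g., heart rate, skin conductance)."""
    return {"heart_rate": random.uniform(60, 110),
            "skin_conductance": random.uniform(0.1, 1.0)}


def estimate_state(signals):
    """Rough placeholder mapping from raw signals to arousal/valence."""
    arousal = min(1.0, max(0.0, (signals["heart_rate"] - 60) / 50))
    valence = 1.0 - signals["skin_conductance"]  # heuristic only
    return EmotionalState(arousal=arousal, valence=valence)


def next_eca_behavior(state, question_index):
    """Pick the virtual recruiter's next move from the candidate's estimated state."""
    if state.arousal > 0.7:
        return "reassuring remark"  # de-escalate a stressed candidate
    return f"technical question #{question_index}"  # continue the interview


if __name__ == "__main__":
    for i in range(1, 4):
        state = estimate_state(read_biofeedback())
        print(f"estimated state: {state} -> ECA does: {next_eca_behavior(state, i)}")
        time.sleep(0.5)
```

In a full system the heuristic mapping would be replaced by trained affect-recognition models fusing physiological, facial, and vocal cues, and the behavior selection would drive the ECA's speech and gestures rather than printing a label.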
