A Job Interview Simulation: Social Cue-Based Interaction with a Virtual Character

This paper presents an approach that uses a virtual character and social signal processing techniques to create an immersive job interview simulation environment. In this environment, the virtual character plays the role of a recruiter that reacts and adapts to the user's behavior through a component for the automatic recognition of social cues (conscious or unconscious behavioral patterns). The social cues pertinent to job interviews were identified in a knowledge elicitation study with real job seekers. Finally, we present two user studies that investigate the feasibility of the proposed approach as well as the impact of such a system on users.
