Clinical interviewing by a virtual human agent with automatic behavior analysis

SimSensei is a Virtual Human (VH) interviewing platform that uses off-the-shelf sensors (i.e., a webcam, a Microsoft Kinect, and a microphone) to capture and interpret real-time audiovisual behavioral signals from users interacting with the VH system. The system was specifically designed for clinical interviewing and health care support: it provides a face-to-face interaction between a user and a VH that can automatically react to the user's inferred state through analysis of behavioral signals gleaned from these sensors. Just as non-verbal behavioral signals shape human-to-human interaction and communication, SimSensei aims to capture and infer user state from signals generated by the user's non-verbal communication, both to improve engagement between the VH and the user and to quantify user state from the data captured across a 20-minute interview. Moreover, previous research with SimSensei indicates that users engaging with this automated system have less fear of evaluation and self-disclose more personal information than when they believe the VH is controlled by a human-in-the-loop (Lucas et al., 2014). The current study presents results from a sample of military service members (SMs) who were interviewed within the SimSensei system before and after a deployment to Afghanistan. Results indicate that SMs reveal more PTSD symptoms to the SimSensei VH agent than they self-report on the Post-Deployment Health Assessment. Pre/post-deployment facial expression analysis indicated more sad expressions and fewer happy expressions at post-deployment.

[1] Albert A. Rizzo, et al. User-State Sensing for Virtual Health Agents and TeleHealth Applications, 2013, MMVR.

[2] Albert Rizzo, et al. Virtual Reality Standardized Patients for Clinical Training, 2016.

[3] David R. Traum, et al. Multi-party, Multi-issue, Multi-strategy Negotiation for Multi-modal Virtual Agents, 2008, IVA.

[4] Sin-Hwa Kang, et al. The impact of avatar realism and anonymity on effective communication via mobile devices, 2013, Comput. Hum. Behav.

[5] Albert A. Rizzo, et al. Automatic audiovisual behavior descriptors for psychological disorder analysis, 2014, Image Vis. Comput.

[6] D. Lazer, et al. Using reality mining to improve public health and medicine, 2009, Studies in Health Technology and Informatics.

[7] Stacy Marsella, et al. A domain-independent framework for modeling emotion, 2004, Cognitive Systems Research.

[8] Louis-Philippe Morency, et al. Autonomous Virtual Human Agents for Healthcare Information Support and Clinical Interviewing, 2016.

[9] Jonathan Gratch, et al. Exploring users' social responses to computer counseling interviewers' behavior, 2014, Comput. Hum. Behav.

[10] Norman I. Badler, et al. Creating Interactive Virtual Humans: Some Assembly Required, 2002, IEEE Intell. Syst.

[11] Javier R. Movellan, et al. Generalized adaptive view-based appearance model: Integrated framework for monocular head pose estimation, 2008, 8th IEEE International Conference on Automatic Face & Gesture Recognition.

[12] Dina Utami, et al. Improving Access to Online Health Information With Conversational Agents: A Randomized Controlled Experiment, 2016, Journal of Medical Internet Research.

[13] Suzanne P. Weisband, et al. Self disclosure on computer forms: meta-analysis and implications, 1996, CHI.

[14] J. Hox, et al. A Comparison of Randomized Response, Computer-Assisted Self-Interview, and Face-to-Face Direct Questioning, 2000.

[15] Gwen Littlewort, et al. Toward Practical Smile Detection, 2009, IEEE Transactions on Pattern Analysis and Machine Intelligence.

[16] Peter Robinson, et al. 3D Constrained Local Model for rigid and non-rigid facial tracking, 2012, IEEE Conference on Computer Vision and Pattern Recognition.

[17] A. Joinson. Self-disclosure in computer-mediated communication: The role of self-awareness and visual anonymity, 2001.

[18] Jonathan Gratch, et al. Socially Anxious People Reveal More Personal Information with Virtual Counselors That Talk about Themselves using Intimate Human Back Stories, 2012, Annual Review of Cybertherapy and Telemedicine.

[19] Reginald P. Baker, et al. New Technology in Survey Research: Computer-Assisted Personal Interviewing (CAPI), 1992.

[20] Andreas Beckenbach, et al. Computer-Assisted Questioning: The New Survey Methods in the Perception of the Respondents, 1995.

[21] David DeVault, et al. Detection and Computational Analysis of Psychological Signals Using a Virtual Human Interviewing Agent, 2014.

[22] J. Talbott. Importance of Anonymity to Encourage Honest Reporting in Mental Health Screening After Combat Deployment, 2013.

[23] Albert A. Rizzo, et al. Sorting Out the Virtual Patient: How to Exploit Artificial Intelligence, Game Technology and Sound Educational Practices to Create Engaging Role-Playing Simulations, 2012, Int. J. Gaming Comput. Mediat. Simulations.

[24] Kallirroi Georgila, et al. SimSensei Kiosk: a virtual human interviewer for healthcare decision support, 2014, AAMAS.

[25] P. Ekman, et al. What the face reveals: basic and applied studies of spontaneous expression using the facial action coding system (FACS), 2005.

[26] H. Chad Lane, et al. Virtual Humans for Learning, 2013, AI Mag.

[27] Louis-Philippe Morency. Modeling Human Communication Dynamics, 2010.

[28] Alex Pentland. Honest Signals: How They Shape Our World, 2008.