Web-Based Embodied Conversational Agents and Older People

Within Human-Computer Interaction, there has recently been an important turn towards embodied and voice-based interaction. In this chapter, we discuss our ongoing research on building online Embodied Conversational Agents (ECAs), focusing on their interactive 3D web graphics aspects. We present ECAs based on our technological pipeline, which integrates a number of free character-creation editors, such as Adobe Fuse CC and MakeHuman, and standards, mainly BML (Behaviour Markup Language). We claim that making embodiment available for online ECAs is attainable, and advantageous over current, mostly desktop-based, alternatives. We also report initial results of activities aimed at exploring the physical appearance of ECAs for older people: a group of older adults (N = 14) designed female ECAs, and found the design process easy and enjoyable. The perspective on older-adult HCI introduced in this chapter is mostly technological, allowing for rapid online experimentation to address key issues, such as anthropomorphic aspects, in the design of ECAs with, and for, older people.
