Conversing with children: cartoon and video people elicit similar conversational behaviors

Interactive animated characters have the potential to engage and educate children, but little research directly compares children's interactions with animated characters to their interactions with real people. We conducted an experiment with 69 children between the ages of 4 and 10 years to investigate whether they converse differently when their interactive partner appears as a cartoon character rather than as a person. A subset of the participants interacted with characters whose facial motion was exaggerated or damped. Each child completed two conversations with an adult confederate, who appeared once as herself over video and once as a cartoon character. We measured how much the children spoke, compared their gaze and gesture patterns, and asked them to rate their conversations and indicate their preferred partner. Children's conversational behavior did not differ between the cartoon character and the person on video, even among children who preferred the person and when the cartoon exhibited altered motion. These results suggest that children will interact with animated characters much as they would with another person.