Teaching Language to Deaf Infants with a Robot and a Virtual Human

Children with insufficient exposure to language during critical developmental periods in infancy are at risk for cognitive, language, and social deficits [55]. The risk is especially acute for deaf infants, as more than 90% are born to hearing parents with little sign language experience [48]. We created an integrated multi-agent system, involving a robot and a virtual human, designed to augment language exposure for 6- to 12-month-old infants. Human-machine design for infants is challenging, as most screen-based media are unlikely to support learning in infancy [33]. Robots presently lack the dexterity and expressiveness required for signing, and even if such capability existed, developmental questions would remain about whether language from artificial agents can engage infants at all. Here we engineered the robot and avatar to provide visual language and to effect socially contingent, human-like conversational exchange. Case studies of deaf and hearing infants demonstrate that our technology successfully engages them.
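As a concrete illustration of what "socially contingent" means here, the following minimal sketch shows an interaction loop in which signed content is triggered by the infant's estimated attentional state rather than played on a fixed schedule. It is not the authors' implementation: the class names, behaviors, and threshold (InfantPerception, Robot, Avatar, the 0.6 confidence cutoff) are hypothetical stand-ins chosen only to make the contingency logic explicit.

```python
"""Minimal sketch of a socially contingent robot/avatar exchange.

Hypothetical stand-ins for perception and actuation; not the
authors' system. The point is that agent behavior branches on the
infant's attentional state instead of following a fixed script.
"""

import random
import time
from dataclasses import dataclass


@dataclass
class AttentionEstimate:
    """Where the infant appears to be looking, with a confidence score."""
    target: str        # e.g. "robot", "screen", or "away"
    confidence: float  # 0.0-1.0


class InfantPerception:
    """Stand-in for a gaze/engagement estimator (e.g. camera-based tracking)."""

    def sense(self) -> AttentionEstimate:
        # A real system would fuse gaze and arousal cues; we simulate with random draws.
        target = random.choice(["robot", "screen", "away"])
        return AttentionEstimate(target=target, confidence=random.uniform(0.5, 1.0))


class Robot:
    """Stand-in for the physical robot's attention-directing behaviors."""

    def greet(self) -> None:
        print("[robot] waves and turns toward the infant")

    def direct_gaze_to_screen(self) -> None:
        print("[robot] turns its head toward the avatar screen")


class Avatar:
    """Stand-in for the signing virtual human."""

    def sign_rhyme(self, name: str) -> None:
        print(f"[avatar] signs the '{name}' nursery rhyme")


def contingent_session(rounds: int = 5) -> None:
    """Run a few contingent exchanges: agents respond to the infant's attention."""
    perception, robot, avatar = InfantPerception(), Robot(), Avatar()
    rhymes = ["boats", "bears", "stars"]

    for _ in range(rounds):
        estimate = perception.sense()
        if estimate.target == "away" or estimate.confidence < 0.6:
            robot.greet()                    # try to recapture attention
        elif estimate.target == "robot":
            robot.direct_gaze_to_screen()    # hand attention off to the avatar
        else:                                # infant is already looking at the screen
            avatar.sign_rhyme(random.choice(rhymes))
        time.sleep(0.1)                      # pacing placeholder


if __name__ == "__main__":
    contingent_session()
```

The sketch's only real content is the branching: the robot recaptures or redirects gaze, and the avatar signs only when the infant is attending to the screen, mirroring the contingent exchange described above.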

[1] Hatice Kose-Bagci, et al. Non-verbal communication with a social robot peer: Towards robot assisted interactive sign language tutoring. 2014 IEEE-RAS International Conference on Humanoid Robots.

[2] Scott E. Hudson, et al. Spatial and Other Social Engagement Cues in a Child-Robot Interaction: Effects of a Sidekick. 2014 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI).

[3] K. Meints, et al. Baby schema in human and animal faces induces cuteness perception and gaze allocation in children. Front. Psychol., 2014.

[4] A. Geers, et al. Will they catch up? The role of age at cochlear implantation in the spoken language development of children with severe to profound hearing loss. Journal of Speech, Language, and Hearing Research: JSLHR, 2007.

[5] W. Lewis Johnson, et al. Animated Agents for Procedural Training in Virtual Reality: Perception, Cognition, and Motor Control. Appl. Artif. Intell., 1999.

[6] David DeVault, et al. Toward Rapid Development of Multi-Party Virtual Human Negotiation Scenarios, 2011.

[7] Mahadev Satyanarayanan, et al. OpenFace: A general-purpose face recognition library with mobile applications, 2016.

[8] A. Merla, et al. Thermal Signatures of Emotional Arousal: A Functional Infrared Imaging Study. 2007 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society.

[9] Ian Marshall, et al. Development of a legible deaf-signing virtual human. Proceedings IEEE International Conference on Multimedia Computing and Systems, 1999.

[10] Stacy Marsella, et al. Tactical Language Training System: An Interim Report. Intelligent Tutoring Systems, 2004.

[11] J. R. Saffran, et al. The acquisition of language by children. Proceedings of the National Academy of Sciences of the United States of America, 2001.

[12] Susan R. Goldman, et al. Sam Goes to School: Story Listening Systems in the Classroom. ICLS, 2004.

[13] Brian Scassellati, et al. Integrating socially assistive robotics into mental healthcare interventions: applications and recommendations for expanded use. Clinical Psychology Review, 2015.

[14] Vittorio Gallese, et al. The Autonomic Signature of Guilt in Children: A Thermal Infrared Imaging Study. PLoS ONE, 2013.

[15] M. Krcmar, et al. Can Toddlers Learn Vocabulary from Television? An Experimental Approach, 2007.

[16] Daniel F. Parks, et al. Complementary effects of gaze direction and early saliency in guiding fixations during free viewing. Journal of Vision, 2014.

[17] S. Pauen, et al. Neural correlates of human–animal distinction, 2014.

[18] Jeffrey L. Sokolov, et al. A Local Contingency Analysis of the Fine-Tuning Hypothesis, 1993.

[19] Brian Scassellati, et al. Emotional Storytelling in the Classroom: Individual versus Group Interaction between Children and Robots. 2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI).

[20] Brian Scassellati, et al. Personalizing Robot Tutors to Individuals' Learning Differences. 2014 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI).

[21] David Traum, et al. The Information State Approach to Dialogue Management, 2003.

[22] C. Moore, et al. Development of joint visual attention in infants, 1995.

[23] H. Wilson, et al. Perception of head orientation. Vision Research, 2000.

[24] L. A. Petitto, et al. Visual sign phonology: insights into human reading and language from a natural soundless phonology. Wiley Interdisciplinary Reviews: Cognitive Science, 2016.

[25] David Ostry, et al. Language rhythms in baby hand movements. Nature, 2001.

[26] L. Petitto, et al. Babbling in the manual mode: evidence for the ontogeny of language. Science, 1991.

[27] Michael Gleicher, et al. Retargetting motion to new characters. SIGGRAPH, 1998.

[28] J. Cassell, et al. Authorable Virtual Peers for Autism Spectrum Disorders, 2006.

[29] B. Scassellati, et al. Social eye gaze in human-robot interaction. J. Hum. Robot Interact., 2017.

[30] P. Kuhl, et al. Foreign-language experience in infancy: Effects of short-term exposure and social interaction on phonetic learning. Proceedings of the National Academy of Sciences of the United States of America, 2003.

[31] D. Ostry, et al. Baby hands that move to the rhythm of language: hearing babies acquiring sign languages babble silently on the hands. Cognition, 2004.

[32] Daniel R. Anderson, et al. Television and Very Young Children, 2005.

[33] M. Anbar, et al. Physiological, clinical and psychological applications of dynamic infrared imaging. Proceedings of the 25th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (IEEE Cat. No.03CH37439), 2003.

[34] Jeff Rickel, et al. Virtual Humans for Team Training in Virtual Reality, 1999.

[35] Paul Debevec, et al. The Light Stages and Their Applications to Photoreal Digital Actors. SIGGRAPH 2012.

[36] Hatice Kose-Bagci, et al. Socially Interactive Robotic Platforms as Sign Language Tutors. Int. J. Humanoid Robotics, 2014.

[37] Mohamed Jemni, et al. A Review on 3D Signing Avatars: Benefits, Uses and Challenges. Int. J. Multim. Data Eng. Manag., 2013.

[38] Matt Huenerfauth, et al. Evaluating Facial Expressions in American Sign Language Animations for Accessible Online Information. HCI, 2013.

[39] Rajesh P. N. Rao, et al. "Social" robots are psychological agents for infants: A test of gaze following. Neural Networks, 2010.

[40] Arthur C. Graesser, et al. AutoTutor: an intelligent tutoring system with mixed-initiative dialogue. IEEE Transactions on Education, 2005.

[41] Dimitri A. Christakis, et al. The effects of infant media usage: what do we know and what should we learn? Acta Paediatrica, 2009.

[42] Arcangelo Merla, et al. New Frontiers for Applications of Thermal Infrared Imaging Devices: Computational Psychophysiology in the Neurosciences. Sensors, 2017.

[43] Ari Shapiro, et al. Building a Character Animation System. MIG, 2011.

[44] Rebekah A. Richert, et al. Media as social partners: the social nature of young children's learning from screen media. Child Development, 2011.

[45] Lynette van Zijl, et al. The development of a generic signing avatar, 2007.

[46] Hatice Kose-Bagci, et al. Evaluation of the Robot Assisted Sign Language Tutoring Using Video-Based Studies. International Journal of Social Robotics, 2012.

[47] L. Petitto, et al. Visual Sonority Modulates Infants' Attraction to Sign Language. Language Learning and Development: the official journal of the Society for Language Development, 2018.

[48] K. Lorenz, et al. Die angeborenen Formen möglicher Erfahrung [The innate forms of possible experience], 2010.

[49] Brian Scassellati, et al. The Physical Presence of a Robot Tutor Increases Cognitive Learning Gains. CogSci, 2012.

[50] Brian Scassellati, et al. The Benefits of Interactions with Physically Present Robots over Video-Displayed Agents. Int. J. Soc. Robotics, 2011.

[51] Mark H. Johnson, et al. Brain responses reveal young infants' sensitivity to when a social partner follows their gaze. Developmental Cognitive Neuroscience, 2013.

[52] Brian Scassellati, et al. Bridging the research gap. HRI 2012.

[53] Brian Scassellati, et al. Socially Assistive Robotics: Methods and Implications for the Future of Work and Care. Robophilosophy, 2022.

[54] Anton Leuski, et al. Ada and Grace: Direct Interaction with Museum Visitors. IVA, 2012.

[55] T. Kanda, et al. Can we talk to robots? Ten-month-old infants expected interactive humanoid robots to be talked to by persons. Cognition, 2005.

[56] Hatice Kose-Bagci, et al. A New Robotic Platform for Sign Language Tutoring. International Journal of Social Robotics, 2015.

[57] S. Langton, et al. The influence of head contour and nose angle on the perception of eye-gaze direction. Perception & Psychophysics, 2004.

[58] Georgene L. Troseth, et al. Do Babies Learn From Baby Media? Psychological Science, 2010.

[59] Han-Pang Huang, et al. Realization of sign language motion using a dual-arm/hand humanoid robot. Intelligent Service Robotics, 2016.

[60] Randall W. Hill, et al. Toward a New Generation of Virtual Humans for Interactive Experiences. IEEE Intell. Syst., 2002.

[61] J. Snow, et al. From the National Institute on Deafness and other Communication Disorders. Otolaryngology--Head and Neck Surgery: official journal of American Academy of Otolaryngology-Head and Neck Surgery, 1992.

[62] M. Bornstein, et al. Maternal responsiveness and children's achievement of language milestones. Child Development, 2001.

[63] A. Merla, et al. Mom feels what her child feels: thermal signatures of vicarious autonomic response while watching children in a stressful situation. Front. Hum. Neurosci., 2013.

[64] Marina Krcmar, et al. Word Learning in Very Young Children From Infant-Directed DVDs, 2011.

[65] Brian Scassellati, et al. Shaping productive help-seeking behavior during robot-child tutoring interactions. 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI).

[66] Maja J. Mataric, et al. The role of physical embodiment in human-robot interaction. ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication.

[67] Arcangelo Merla, et al. Thermal expression of intersubjectivity offers new possibilities to human–machine and technologically mediated interactions. Front. Psychol., 2014.

[68] Anton Leuski, et al. From domain specification to virtual humans: an integrated approach to authoring tactical questioning characters. INTERSPEECH, 2008.

[69] B. Scassellati, et al. Robots for use in autism research. Annual Review of Biomedical Engineering, 2012.

[70] Maja J. Mataric, et al. Toward Personalized Pain Anxiety Reduction for Children. AAAI Fall Symposia, 2015.

[71] Brian Scassellati, et al. Narratives with Robots: The Impact of Interaction Context and Individual Differences on Story Recall and Emotional Understanding. Front. Robot. AI, 2017.

[72] Rosalee Wolfe, et al. Generating Co-occurring Facial Nonmanual Signals in Synthesized American Sign Language. GRAPP/IVAPP, 2013.

[73] Alexis Héloir, et al. Sign Language Avatars: Animation and Comprehensibility. IVA, 2011.

[74] L. Petitto. How the brain begets language, 2005.