Can Motionese Tell Infants and Robots "What to Imitate"?

An open question in imitation by infants and robots is how they know "what to imitate." We suggest that parental modifications of actions, called motionese, can help infants and robots detect the meaningful structure of the actions. Parents tend to modify their infant-directed actions, e.g., by inserting longer pauses between actions and exaggerating movements, which is assumed to help infants understand the meaning and structure of the actions. To investigate how such modifications contribute to infants' understanding of actions, we analyzed parental actions from an infant-like viewpoint by applying a model of saliency-based visual attention. Our model of an infant-like viewpoint does not assume any a priori knowledge about the actions or the objects used in them, nor any specific capability to detect a parent's face or hands. Instead, it detects and gazes at salient locations in a scene, i.e., locations that stand out from their surroundings because of primitive visual features. The model thus demonstrates which low-level aspects of parental actions are highlighted in the action sequences and could attract the attention of young infants and robots. Our quantitative analysis revealed that motionese can help them (1) receive immediate social feedback on the actions, (2) detect the initial and goal states of the actions, and (3) look at the static features of the objects used in the actions. We discuss these results with respect to the issue of "what to imitate."
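The saliency-based attention model referred to above follows the architecture of Itti, Koch, and Niebur: primitive feature channels are contrasted against their surroundings across scales and combined into a single saliency map, whose peak gives the gaze target. The sketch below is a minimal illustration of that idea in Python with OpenCV and NumPy; it is our own simplification under stated assumptions, not the authors' implementation. It restricts the model to intensity and color-opponency channels, omitting the orientation and motion channels of the full model, and the function names (saliency_map, most_salient_point) are hypothetical.

```python
import cv2
import numpy as np

def saliency_map(frame_bgr, levels=4):
    """Coarse saliency map from intensity and color-opponency features,
    in the spirit of Itti et al.'s center-surround model.
    (Orientation and motion channels of the full model are omitted.)"""
    img = frame_bgr.astype(np.float32) / 255.0
    b, g, r = cv2.split(img)
    intensity = (r + g + b) / 3.0
    rg = r - g                    # red-green opponency
    by = b - (r + g) / 2.0        # blue-yellow opponency

    saliency = np.zeros(intensity.shape, np.float32)
    for ch in (intensity, rg, by):
        # Center-surround contrast: difference between the channel and a
        # blurred (coarser-scale) version of itself, at several scales.
        for lvl in range(1, levels + 1):
            surround = cv2.GaussianBlur(ch, (0, 0), 2.0 ** lvl)
            saliency += np.abs(ch - surround)

    return cv2.normalize(saliency, None, 0.0, 1.0, cv2.NORM_MINMAX)

def most_salient_point(frame_bgr):
    """Return the (x, y) location such an attention model would gaze at."""
    s = saliency_map(frame_bgr)
    y, x = np.unravel_index(np.argmax(s), s.shape)
    return int(x), int(y)
```

Applied frame by frame to a video of a parent demonstrating a task, the peak location approximates where a knowledge-free observer would look, which is the kind of bottom-up measure the analysis relies on.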
