Joint attention through the hands: Investigating the timing of object labeling in dyadic social interaction

Previous studies of joint attention and its role in language learning have focused on eye-gaze cues. The goal of the present study is to discover fine-grained patterns of joint hand activity in child-parent social interaction that facilitate successful word learning. To this end, we address three topics: 1) quantifying joint manual actions between parent and child, in particular how the child follows the parent's bid for attention through manual actions; 2) discovering the timing between joint manual actions and object-naming events; and 3) linking that timing with word-learning outcomes. Multiple high-resolution data streams were examined for episodes in which object-labeling events either preceded or followed joint attentional focus as established through the dyad's hand actions. Our findings suggest that the success of word learning through social interaction depends on the specific timing between follow-in joint hand activities and naming events.
