Embodied solution: The world from a toddler’s point of view

An important goal in studying both human and artificial intelligence is to understand how a natural or artificial learning system deals with the uncertainty and ambiguity of the real world. We suggest that the only aspects of a learning environment relevant to the learner are those that make contact with the learner's sensory system. Moreover, in real-world interaction, what the learner perceives through that sensory system depends critically on his own actions, his social partner's actions, and his interactions with the world. In this way, the perception-action loops both within a learner and between the learner and his social partners may provide an embodied solution that significantly simplifies the social and physical learning environment and filters out information irrelevant to the current learning task, ultimately leading to successful learning. In light of this, we report new findings obtained with a novel method that describes the visual learning environment from a young child's point of view. The method uses a multi-camera sensing environment in which two head-mounted mini cameras are placed on the child's and the parent's foreheads, respectively. The main results are that (1) the adult's and the child's views are fundamentally different when they interact in the same environment; (2) what the child perceives most often depends on his own actions and his social partner's actions; and (3) the actions generated by both social partners provide more constrained and cleaner input that facilitates learning. These findings have broad implications for how one studies and thinks about human and artificial learning systems.