An Examination of the Static to Dynamic Imitation Spectrum

We examine the continuum between two social learning paradigms widely used in robotics research: (i) following, or matched-dependent behaviour, and (ii) static observational learning. We use physical robots with minimal sensory capabilities, whose controllers employ neural-network-based methods for agent-centred perception of the model's angle and distance. The imitating robot is first trained to perceive the dynamic movement of a model robot carrying a light source; it then learns by observing the model demonstrate a behaviour, and finally attempts to re-enact the learnt behaviour. Our results indicate that a dynamic observation strategy using rotation performs significantly better than static observation. However, given the robot's embodiment, a dynamic strategy using both rotational and translational movement proves more problematic. We give reasons for this, discuss lessons learned about combining these types of social learning, and suggest requirements for imitator robots that use dynamic observation.
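
The abstract does not specify the network architecture, sensor layout, or training procedure, so the sketch below is purely illustrative: it assumes a hypothetical ring of eight ambient-light sensors, a toy sensor model, and a plain single-hidden-layer regressor trained to map raw readings to an agent-centred percept of the model's bearing and distance. All names, parameters, and the sensor model itself are assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SENSORS = 8   # hypothetical ring of ambient-light sensors on the imitator
N_HIDDEN = 32

def simulate_sensors(bearing, distance):
    """Toy sensor model: each sensor responds with the cosine of the angle
    between its facing direction and the light source, attenuated with
    distance and perturbed by a little noise."""
    sensor_angles = np.linspace(0.0, 2.0 * np.pi, N_SENSORS, endpoint=False)
    response = np.clip(np.cos(sensor_angles - bearing), 0.0, None)
    response = response / (1.0 + distance ** 2)
    return response + rng.normal(0.0, 0.01, N_SENSORS)

# Supervised training set: random poses of the light-carrying model relative
# to the observer, paired with the resulting sensor readings.
bearings = rng.uniform(-np.pi, np.pi, 2000)
distances = rng.uniform(0.1, 2.0, 2000)
X = np.stack([simulate_sensors(b, d) for b, d in zip(bearings, distances)])
# Predict (sin, cos) of the bearing to avoid angle wrap-around, plus distance.
Y = np.stack([np.sin(bearings), np.cos(bearings), distances], axis=1)

# Single-hidden-layer regressor trained with plain gradient descent.
W1 = rng.normal(0.0, 0.5, (N_SENSORS, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0.0, 0.5, (N_HIDDEN, 3))
b2 = np.zeros(3)
lr = 0.05

for epoch in range(500):
    H = np.tanh(X @ W1 + b1)          # hidden activations
    pred = H @ W2 + b2                # predicted (sin, cos, distance)
    err = pred - Y
    # Backpropagate the mean-squared-error gradient.
    gW2 = H.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H ** 2)
    gW1 = X.T @ dH / len(X)
    gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def perceive(readings):
    """Map raw sensor readings to an estimated (bearing, distance) percept."""
    h = np.tanh(readings @ W1 + b1)
    s, c, d = h @ W2 + b2
    return np.arctan2(s, c), d

est_bearing, est_distance = perceive(simulate_sensors(0.5, 1.0))
print(f"estimated bearing {est_bearing:.2f} rad, distance {est_distance:.2f} m")
```

In an observational-learning loop of the kind the abstract describes, such a percept would feed the stages that follow: the imitator tracks the model during the demonstration (statically, or dynamically by rotating and/or translating) and later uses the recorded bearing/distance trajectory to re-enact the behaviour.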
