Visual perception and reproduction for imitative learning of a partner robot

This paper proposes a method of visual perception and model reproduction based on imitation for a partner robot interacting with a human. First, we discuss the role of imitation and propose a method for imitative behavior generation. After the robot searches for a human using a CCD camera, human hand positions are extracted from the series of images taken by the camera. Next, the extracted sequence of hand positions is fed as input to a fuzzy spiking neural network, which recognizes the sequence as a motion pattern. A trajectory for the robot's behavior is then generated and updated by a steady-state genetic algorithm based on the recognized human motion pattern. Furthermore, a self-organizing map is used to cluster the human hand motion patterns. Finally, we present experimental results of imitative behavior generation through interaction with a human.
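To make the clustering step concrete, the sketch below shows how a self-organizing map could group hand motion patterns represented as flattened position sequences. This is a generic SOM implementation, not the paper's actual network: the grid size, learning schedule, and input encoding are all illustrative assumptions.

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Train a 2-D self-organizing map on motion-pattern vectors.

    data: (n_patterns, dim) array, e.g. flattened (x, y) hand trajectories.
    Returns the (grid[0]*grid[1], dim) array of unit weight vectors.
    """
    rng = np.random.default_rng(seed)
    n_units = grid[0] * grid[1]
    weights = rng.normal(size=(n_units, data.shape[1]))
    # Grid coordinates of each unit, used by the neighborhood function.
    coords = np.array([(i, j) for i in range(grid[0])
                       for j in range(grid[1])], dtype=float)
    for epoch in range(epochs):
        # Linearly decaying learning rate and neighborhood radius.
        lr = lr0 * (1 - epoch / epochs)
        sigma = sigma0 * (1 - epoch / epochs) + 0.5
        for x in data:
            # Best-matching unit: weight vector closest to the input.
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            # Gaussian neighborhood around the BMU on the map grid.
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
    return weights

def cluster(data, weights):
    """Assign each motion pattern to the index of its best-matching unit."""
    return np.array([np.argmin(np.linalg.norm(weights - x, axis=1))
                     for x in data])
```

As a usage example, two well-separated groups of toy "trajectories" (vectors near 0 and near 5) should be assigned to disjoint sets of map units after training, which is the behavior the clustering stage relies on.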