Visual perception for a partner robot based on computational intelligence

This paper proposes a method of visual perception for a partner robot interacting with a human. A robot with a physical body should extract information by using prediction based on the dynamics of its environment, because doing so can reduce the computational cost. In addition, imitation is a powerful tool both for gestural interaction among children and for parents teaching behaviors to children. Furthermore, another's action can serve as a hint for acquiring a new behavior that need not be identical to the original action. Accordingly, this paper proposes a visual perception method for a partner robot based on the interactive teaching mechanism of a human teacher. The proposed method is composed of a spiking neural network, a self-organizing map, a steady-state genetic algorithm, and a softmax action selection strategy. Finally, we discuss the interactive learning of a human and a partner robot based on the proposed method through several experimental results.
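The abstract names a softmax action selection strategy as one component. As a minimal sketch, and not the paper's actual implementation, Boltzmann (softmax) selection samples an action with probability proportional to exp(Q(a)/T), where the temperature T trades off exploration against exploitation:

```python
import math
import random

def softmax_action_selection(q_values, temperature=1.0):
    """Sample an action index with probability proportional to
    exp(Q(a) / T) (Boltzmann exploration)."""
    # Subtract the maximum value for numerical stability.
    m = max(q_values)
    exp_q = [math.exp((q - m) / temperature) for q in q_values]
    total = sum(exp_q)
    probs = [e / total for e in exp_q]
    # Sample an action according to the softmax distribution.
    r = random.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1
```

A high temperature makes the selection nearly uniform (exploration), while a low temperature makes it nearly greedy (exploitation); the function names and values here are illustrative only.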