Computational intelligence for human-friendly robot partners based on multi-modal communication

This paper discusses multi-modal communication for robot partners based on computational intelligence in an informationally structured space. First, we explain the recognition methods used in the multi-modal communication: touch-interface recognition, voice recognition, human detection, and gesture recognition. Next, we propose a conversation system that realizes multi-modal communication with a person. Finally, we present several experimental results of the proposed method and discuss future directions of this research.