Towards grounded human-robot communication

Future robots are expected to communicate with humans in natural language. Naive human users will expect a robot to easily understand what they mean when giving instructions about its tasks. This implies that the robot will need a means of grounding, in its own sensors, the natural-language terms and constructions used by the human user. This paper presents an approach to this problem based on the integration of a "learning server" into the robot's software architecture. Such a server should be capable of on-line, incremental learning from examples; it should handle multiple learning problems concurrently; and it should have meta-learning capabilities. A learning server already developed by the authors is presented. Complementarily, the dimensionality reduction problem is also addressed, using a blocked DCT (Discrete Cosine Transform) approach. Experimental results are obtained in a scenario in which three concepts (corresponding to natural language expressions) are learned concurrently.
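
The abstract refers to a blocked-DCT approach to dimensionality reduction. The following is a minimal sketch of how such a feature extractor might look, assuming 8x8 blocks and retention of only the low-frequency coefficients of each block; the block size, the number of retained coefficients, and the function name blocked_dct_features are illustrative assumptions, not the exact parameters or interface used in the paper.

```python
# Sketch of blocked-DCT dimensionality reduction (assumed parameters, not the
# paper's exact configuration): tile the image, transform each tile with a 2-D
# DCT, and keep only the top-left (low-frequency) coefficients.
import numpy as np
from scipy.fftpack import dct

def blocked_dct_features(image, block=8, keep=3):
    """Split a grayscale image into block x block tiles, apply a 2-D DCT to
    each tile, and keep the keep x keep low-frequency coefficients."""
    h, w = image.shape
    feats = []
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            tile = image[r:r + block, c:c + block].astype(float)
            # Separable 2-D DCT: transform along rows, then along columns.
            coeffs = dct(dct(tile, norm='ortho', axis=0), norm='ortho', axis=1)
            feats.append(coeffs[:keep, :keep].ravel())
    return np.concatenate(feats)

# Example: a 64x64 image yields 64 tiles of 9 coefficients each, i.e. a
# 576-dimensional feature vector instead of 4096 raw pixel values.
features = blocked_dct_features(np.random.rand(64, 64))
print(features.shape)  # (576,)
```

Features of this kind could then be fed to the learning server as training examples, keeping the input dimensionality low enough for on-line, incremental learning.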
