Despite increased activity in robotics, relatively few advances have been made in the area of human-robot interaction. The most successful interfaces in the recent RoboCup Rescue competition were teleoperation interfaces. However, some believe that teams of robots under supervisory control may ultimately lead to better performance in real-world operations. Such robots would be commanded with high-level commands rather than batch sequences of low-level commands. For humans to command teams of semi-autonomous robots in a dynamically changing environment, the human-robot interface will need to include several aspects of human-human communication. These aspects include cooperatively detecting and resolving problems, making use of conversational and situational context, maintaining context across multiple conversations, and using verbal and non-verbal information. This paper describes a demonstration system and dialogue architecture for the multimodal control of robots that is flexibly adaptable to accommodate the capabilities and limitations of both PDA and kiosk environments.