This paper describes a human-robot interaction subsystem that is part of the ViRbot, a robotics architecture used to control the operation of service mobile robots. The Human/Robot Interface subsystem consists of three modules: Natural Language Understanding, Speech Generation, and Robot's Facial Expressions. To demonstrate the utility of this human-robot interaction subsystem, we present a set of applications that allows a user to command a mobile robot through spoken commands. The mobile robot accomplishes the required commands using an action planner and reactive behaviors. In the ViRbot architecture, the action planner module uses Conceptual Dependency (CD) primitives as the basis for representing the problem domain. After a command is spoken, a CD representation of it is generated; a rule-based system takes this CD representation and, using the state of the environment, generates further subtasks, also represented as CDs, to accomplish the command. This paper also presents how to represent context through scripts. Scripts make it easy to draw inferences about events for which information is incomplete or ambiguous; they encode common-sense knowledge and serve to fill the gaps between seemingly unrelated events.
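To make the pipeline concrete, the sketch below shows one plausible way a spoken command could be mapped to a CD frame and expanded into subtask CDs by a small rule base. It is a minimal illustration, not the ViRbot implementation: the `CD` fields, the toy parser, and the door-opening rule are all assumptions introduced here for clarity.

```python
# Minimal sketch (not the ViRbot implementation) of the abstract's pipeline:
# spoken command -> CD frame -> rule-based expansion into subtask CDs.
# All class/function names and the example rule are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class CD:
    """A Conceptual Dependency frame: a primitive act plus role fillers."""
    primitive: str            # e.g. PTRANS (physical transfer of location)
    actor: str = ""
    obj: str = ""
    source: str = ""
    destination: str = ""


def parse_command(utterance: str) -> CD:
    """Toy mapping from a recognized utterance to a goal CD frame."""
    # "robot go to the kitchen" -> PTRANS(actor=robot, obj=robot, to=kitchen)
    words = utterance.lower().split()
    return CD(primitive="PTRANS", actor="robot", obj="robot",
              destination=words[-1])


def plan(goal: CD, world: dict) -> list[CD]:
    """Rule-based expansion of a goal CD into subtask CDs, conditioned
    on the current state of the environment (the `world` dict)."""
    subtasks: list[CD] = []
    if goal.primitive == "PTRANS":
        # Hypothetical rule: a closed door must be pushed open (PROPEL)
        # before the robot can move through it.
        if world.get("door_closed", False):
            subtasks.append(CD(primitive="PROPEL", actor="robot", obj="door"))
        subtasks.append(CD(primitive="PTRANS", actor="robot", obj="robot",
                           source=world["robot_at"],
                           destination=goal.destination))
    return subtasks


if __name__ == "__main__":
    world = {"robot_at": "living_room", "door_closed": True}
    for step in plan(parse_command("robot go to the kitchen"), world):
        print(step)
```

Under this reading, a script for a recurring situation (e.g. "fetch an object from another room") would simply be a stored sequence of such CD frames with open role slots, which the planner instantiates to infer steps the user left unstated.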