Speech and action: integration of action and language for mobile robots

We describe the tight integration of incremental natural language understanding, goal management, and action processing in a complex robotic architecture, an integration that is required for natural interactions between robots and humans. Specifically, the natural language components need to process utterances while they are still being spoken so that feedback actions can be initiated in a timely fashion, while the action manager may need information at various points during action execution that must be obtained from humans. We argue that this finer-grained integration yields much more natural human-robot interactions and much more sensible multitasking.
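To make the claimed interplay concrete, the following is a minimal, hypothetical Python sketch of the idea, not the architecture described in the paper: the class names IncrementalUnderstander and ActionManager, the keyword-based slot filling, and the ask_human callback are illustrative assumptions. It shows a goal being created and refined while the utterance is still in progress, and an executor that queries the human only for information it actually lacks.

```python
# Toy incremental understander: consumes one recognized word at a time and can
# trigger feedback actions (e.g., a nod or a clarification request) before the
# utterance is complete, instead of waiting for end-of-utterance.
class IncrementalUnderstander:
    def __init__(self, action_manager):
        self.action_manager = action_manager
        self.words = []

    def on_word(self, word):
        self.words.append(word)
        if word == "bring":
            # As soon as a verb of interest is heard, submit a partial goal
            # rather than waiting for the full sentence.
            self.action_manager.submit_goal({"action": "fetch", "object": None})
            print("[NLU] heard 'bring' -> goal submitted early, nodding to show understanding")
        elif self.words[-2:-1] == ["the"]:
            # Fill in the object slot of the pending goal incrementally.
            self.action_manager.update_goal("object", word)
            print(f"[NLU] object resolved incrementally: {word}")

# Toy action manager: executes goals and may interrupt execution to obtain
# missing information from the human (e.g., which of two mugs is meant).
class ActionManager:
    def __init__(self):
        self.goal = None

    def submit_goal(self, goal):
        self.goal = goal

    def update_goal(self, slot, value):
        if self.goal is not None:
            self.goal[slot] = value

    def execute(self, ask_human):
        if self.goal["object"] is None:
            # Information needed mid-execution is requested from the human.
            self.goal["object"] = ask_human("Which object should I bring?")
        print(f"[ACT] executing: fetch the {self.goal['object']}")

if __name__ == "__main__":
    am = ActionManager()
    nlu = IncrementalUnderstander(am)
    # Words arrive one at a time, as from a streaming speech recognizer.
    for w in ["please", "bring", "me", "the", "mug"]:
        nlu.on_word(w)
    am.execute(ask_human=lambda q: input(q + " "))
```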