Where is this? - Gesture-based multimodal interaction with an anthropomorphic robot

Traditional visitor guidance often suffers from the representational gap between 2D maps and the real world. We therefore propose a robotic information system that exploits its physical embodiment to present a readily interpretable interface for visitor guidance. Like a human receptionist, it offers a familiar point of reference that visitors can approach, and it supports intuitive interaction through both speech and gesture. We focus on employing an anthropomorphic body to improve the guidance functionality and the interpretability of the interaction. A map, which encodes knowledge about the environment, is used by robot and visitor simultaneously, with the robot translating its content into gestures. This shared setting affords disambiguation of information requests and thus improves robustness. The system has been tested both in a laboratory demonstration setting and in our university hall, where people asked for information and thereby used it in a natural way.