In the field of advanced service robotics it is of major importance to design man-machine interfaces that are fast and robust enough to cope with fuzzy and rapid human gestures. This paper describes our implementation of a complex interactive behavior on our multi-degree-of-freedom robot Arnold. The desired behavior is to position Arnold in front of a person and to reach for the person's hand, making use of all degrees of freedom the robot possesses. This is a basic component of tasks that require man-machine interaction, e.g. passing objects or learning by showing. The complex overall behavior can be decomposed into simple behaviors related to the basic robot devices, i.e. platform, arm and head. These processes act in combination without any explicit exchange of information; they are coupled only through their effect on the sensor input, following the dynamic approach described by G. Schöner et al. (1996). Behaviorally relevant information is perceived rapidly by combining color- and stereo-vision algorithms in an active vision process. Closing the feedback loop by observing the environment yields good real-time coordination of active vision with the positioning of hand and platform. We demonstrate that rapid processing of the visual input of an active camera system, combined with closed-loop control of a humanoid robot by distributed dynamical systems, is a promising way to achieve robust and fast man-machine interaction.
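To illustrate the dynamic approach underlying these behaviors, consider the heading direction \(\varphi\) of the platform as a behavioral variable. The following is a minimal sketch in the spirit of the attractor dynamics of Schöner et al.; the rates \(\lambda_{\mathrm{tar}}, \lambda_{\mathrm{obs}}\) and range \(\sigma\) are illustrative parameters, not values taken from the paper. The perceived direction \(\psi_{\mathrm{tar}}\) of the person's hand erects an attractor, while obstacle directions \(\psi_{\mathrm{obs},i}\) erect repellors:

\[
\dot{\varphi} \;=\; -\lambda_{\mathrm{tar}} \,\sin\!\bigl(\varphi - \psi_{\mathrm{tar}}\bigr)
\;+\; \sum_{i} \lambda_{\mathrm{obs}} \,\bigl(\varphi - \psi_{\mathrm{obs},i}\bigr)\,
\exp\!\left(-\frac{\bigl(\varphi - \psi_{\mathrm{obs},i}\bigr)^{2}}{2\sigma^{2}}\right)
\]

The heading relaxes toward the attractor at \(\psi_{\mathrm{tar}}\), while the repellor contributions fall off with angular distance. Arm and head behaviors can be formulated analogously over their own behavioral variables, so that the devices coordinate solely through the shared sensor input rather than through explicit communication.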
[1] R. A. Brooks et al., "From earwigs to humans," Robotics and Autonomous Systems, 1997.
[2] P. Maes, "Modeling adaptive autonomous agents," 1993.
[3] A. Steinhage et al., "Dynamical systems for vision-based autonomous mobile robots," 1996.
[4] G. Schöner et al., "Dynamics of behavior: Theory and applications for autonomous robot architectures," Robotics and Autonomous Systems, 1995.
[5] M. Dose, "Wegeplanung autonomer mobiler Roboter mittels dynamischer Systeme" [Path planning of autonomous mobile robots using dynamical systems], 1995.
[6] C. Goerick, "Local Orientation Coding and Adaptive Thresholding for Real Time Early Vision," 1994.