Camera-based gesture recognition for robot control

Several systems for automatic gesture recognition have been developed using different strategies and approaches. In these systems the recognition engine is mainly based on three algorithms: dynamic pattern matching, statistical classification, and neural networks (NN). In this paper we present four architectures for gesture-based interaction between a human being and an autonomous mobile robot, using the above-mentioned techniques or a hybrid combination of them. Each of our gesture recognition architectures consists of a preprocessor and a decoder. Three of the architectures are hybrid stochastic/connectionist designs. The fourth treats recognition as a template matching problem solved with dynamic programming: the strategy is to find the minimal distance between a continuous input feature sequence and the gesture classes. Preliminary experiments with our baseline system achieved a recognition accuracy of up to 92%. All systems use input from a monocular color video camera and are user-independent, but they do not yet run in real time.
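The dynamic-programming template matching idea can be sketched as a dynamic time warping (DTW) style comparison: each gesture class is represented by a template feature sequence, and an input sequence is assigned to the class with the smallest cumulative alignment distance. The function names, the use of Euclidean frame distance, and the nearest-template classifier below are illustrative assumptions, not the paper's exact formulation.

```python
import math

def dtw_distance(seq, template):
    """Minimal cumulative distance between two feature sequences
    over all monotonic alignments (classic DTW recurrence).

    seq, template: lists of feature vectors (lists of floats).
    """
    n, m = len(seq), len(template)
    INF = float("inf")
    # D[i][j] = best cost of aligning seq[:i] with template[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(seq[i - 1], template[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch template
                                 D[i][j - 1],      # stretch input
                                 D[i - 1][j - 1])  # advance both
    return D[n][m]

def classify(seq, class_templates):
    """Assign seq to the class whose template is nearest under DTW.

    class_templates: dict mapping class name -> template sequence.
    (A hypothetical decoder stage; real systems would use several
    templates per class and a rejection threshold.)
    """
    return min(class_templates,
               key=lambda c: dtw_distance(seq, class_templates[c]))
```

The warping allows input gestures performed at varying speeds to match a single template, which is why dynamic programming is attractive for continuous feature streams.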