In this paper, we describe an algorithm that automatically recognizes human gestures for human-robot interaction using an attention control method. Earlier gesture recognition systems operate only under heavily restricted conditions. To address this problem, we propose a novel model called the APM (Active Plane Model), which represents 2D and 3D gesture information simultaneously, and we present a state-transition algorithm for attention selection. The algorithm first obtains 2D and 3D shape information by deforming the APM and then extracts feature vectors from the deformed model. Next, a gesture space is constructed by applying PCA to the statistics of the training images. Each input image is then compared with the models and symbolized as one of the pose models in the space. Finally, the symbolized pose sequences are recognized as one of the model gestures with an HMM. Experimental results show that the proposed algorithm is effective for constructing an intelligent interface system.
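As a rough illustration of the recognition pipeline described above (PCA gesture space, pose symbolization, HMM classification), the following Python sketch assumes that APM feature vectors have already been extracted as fixed-length arrays. All function names, parameters, and the plain-numpy PCA/forward-algorithm implementations are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def build_gesture_space(train_features, n_components=8):
    """PCA: project training feature vectors into a low-dimensional gesture space.

    train_features: (N, D) array of APM feature vectors (assumed available).
    """
    mean = train_features.mean(axis=0)
    centered = train_features - mean
    # Principal axes via SVD of the centered data; rows of vt are components.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def symbolize(feature, mean, components, pose_models):
    """Map one APM feature vector to the index of the nearest pose model in the space."""
    coords = components @ (feature - mean)
    dists = np.linalg.norm(pose_models - coords, axis=1)
    return int(np.argmin(dists))

def hmm_log_likelihood(symbols, log_pi, log_A, log_B):
    """Forward algorithm: log-likelihood of a pose-symbol sequence under one gesture HMM."""
    alpha = log_pi + log_B[:, symbols[0]]
    for s in symbols[1:]:
        alpha = log_B[:, s] + np.logaddexp.reduce(alpha[:, None] + log_A, axis=0)
    return np.logaddexp.reduce(alpha)

def recognize(symbols, gesture_hmms):
    """Pick the model gesture whose HMM best explains the symbolized pose sequence."""
    scores = [hmm_log_likelihood(symbols, *hmm) for hmm in gesture_hmms]
    return int(np.argmax(scores))
```

In this sketch each gesture is represented by a tuple of log-space HMM parameters (initial probabilities, transition matrix, emission matrix), and recognition simply selects the gesture model with the highest forward log-likelihood for the observed symbol sequence.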