Visual recognition of gestures using dynamic naive Bayesian classifiers

Visual recognition of gestures is an important field of study in human-robot interaction research. Although several approaches to gesture recognition exist, on-line learning of visual gestures has not received the same attention. Teaching a new gesture requires a recognition model that can be trained with just a few examples. In this paper we propose an extension of naive Bayesian classifiers for gesture recognition that we call dynamic naive Bayesian classifiers. In these models, the observation variables combine motion and posture information of the user's right hand. We tested the model with a set of gestures for commanding a mobile robot and compared it with hidden Markov models. When the number of training samples is high, the recognition rate is similar for both types of models; but when the number of training samples is low, dynamic naive Bayesian classifiers perform better. We also show that including posture attributes, in the form of spatial relationships between the right hand and other parts of the body, significantly improves the recognition rate.
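
As a rough illustration, a dynamic naive Bayesian classifier can be viewed as an HMM-like chain whose per-state emission probability factorizes over several observation attributes (the naive Bayes assumption). The sketch below shows how a gesture sequence could be scored with one such model per gesture class; the class name, toy parameters, and the discrete encoding of motion and posture symbols are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Minimal sketch of a dynamic naive Bayesian classifier (DNBC):
# an HMM-like chain whose emission probability factorizes over
# observation attributes (the naive Bayes assumption).
# All names and parameters below are illustrative assumptions.

class DynamicNaiveBayesClassifier:
    def __init__(self, pi, A, B_list):
        """
        pi     : (S,) initial hidden-state distribution
        A      : (S, S) transition matrix, A[i, j] = P(s_t = j | s_{t-1} = i)
        B_list : list of (S, V_k) matrices, one per observation attribute;
                 B_list[k][i, v] = P(o_t^k = v | s_t = i)
        """
        self.pi = np.asarray(pi)
        self.A = np.asarray(A)
        self.B_list = [np.asarray(B) for B in B_list]

    def _emission(self, obs_t):
        # Naive Bayes factorization: product over attributes given the state.
        p = np.ones(len(self.pi))
        for B, v in zip(self.B_list, obs_t):
            p *= B[:, v]
        return p

    def log_likelihood(self, obs_seq):
        # Scaled forward algorithm; obs_seq[t] = (attr_1, ..., attr_K)
        # is a tuple of discrete symbols, one per attribute.
        alpha = self.pi * self._emission(obs_seq[0])
        log_lik = np.log(alpha.sum())
        alpha /= alpha.sum()
        for obs_t in obs_seq[1:]:
            alpha = (alpha @ self.A) * self._emission(obs_t)
            log_lik += np.log(alpha.sum())
            alpha /= alpha.sum()
        return log_lik


# Toy usage: one DNBC per gesture class; a sequence is assigned to the
# model with the highest log-likelihood. Two attributes stand in for a
# motion-direction symbol and a posture relation of the right hand.
if __name__ == "__main__":
    pi = [0.7, 0.3]
    A = [[0.8, 0.2], [0.3, 0.7]]
    B_motion = [[0.6, 0.3, 0.1], [0.1, 0.3, 0.6]]   # 3 motion symbols
    B_posture = [[0.9, 0.1], [0.2, 0.8]]            # 2 posture symbols
    model = DynamicNaiveBayesClassifier(pi, A, [B_motion, B_posture])
    sequence = [(0, 0), (1, 0), (2, 1), (2, 1)]
    print(model.log_likelihood(sequence))
```

In this view, training with few examples is plausible because each attribute's conditional distribution is estimated separately, so the number of parameters grows linearly rather than exponentially in the number of attributes.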
