DBN versus HMM for Gesture Recognition in Human-Robot Interaction

Abstract: We designed an easy-to-use interface based on speech and gesture modalities for controlling an interactive robot. After a brief description of this interface and the platform on which it is implemented, this paper presents the embedded gesture recognition system that is part of this multimodal interface. We describe two methods, namely Hidden Markov Models and Dynamic Bayesian Networks, and discuss their relative performance for this task in our human-robot interaction context. The implementation of our DBN-based recognition is outlined and some quantitative results are shown.

I. INTRODUCTION

Since assistant robots are designed to interact directly with people, finding natural and easy-to-use user interfaces is of fundamental importance [1]. Nevertheless, few robotic systems are currently equipped with a fully on-board multimodal user interface enabling robot control through communication channels such as speech, gesture, or both. The most advanced one is [2], in which a constraint-based multimodal system for speech and 3D pointing gestures has been developed, but its gesture recognition is limited to mono-manual pointing gestures. In other works, such as [3] and [4], gestures are often recognized from monocular images, losing the depth information and thus the ability to handle pointing gestures that are more than merely directional.

With the intention of providing our interactive robot Jido with such an interface, we developed both speech and gesture recognition systems as well as a module for fusing their results. This merging step makes it possible to:
− complete an underspecified sentence, an abbreviation or an omission, which is common in human communication, particularly when a gesture can accompany the utterance or even replace it;
− strengthen each modality by improving the classification rates of multimodal commands through a probabilistic merge of gesture and speech recognition results (one possible form of such a merge is sketched after this introduction).

In this framework, this paper focuses on our one- and two-handed gesture recognition system, which operates on the video stream delivered by the on-board stereo head under the physical constraints imposed by autonomous robotic systems: mobility of the platform, limited and shared computational power, limited memory capacity, etc.

The first section describes, as background, our platform and the interface we developed on it, leading to an explanation of our gesture recognition needs. Next, we discuss the relative performance of Hidden Markov Models (HMM) and Dynamic Bayesian Networks (DBN) for this task, given the output of our 3D visual tracker devoted to the upper human body extremities [5]. Then, the implementation of our DBN-based recognition is outlined: the data clustering process, carried out with a Kohonen network; the model training, performed with an Expectation-Maximization (EM) based algorithm; and the recognition, performed using particle filtering [6]. Finally, some qualitative and quantitative results on a database of symbolic and deictic gestures are presented. The DBN representation, which is commonly used for human activity recognition, is shown to outperform the HMM representation, especially in terms of CPU time consumption and gesture segmentation.
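The introduction does not give the exact form of the probabilistic merge of speech and gesture; as a minimal, hedged illustration, one common choice is to combine per-modality recognition scores under the assumption that the speech and gesture observations are conditionally independent given the intended command:

P(c \mid o_s, o_g) \propto P(o_s \mid c)\, P(o_g \mid c)\, P(c)

where c is a candidate multimodal command, o_s the speech recognizer output and o_g the gesture recognizer output; the command maximizing this posterior is selected. The fusion module actually used by the authors may differ from this sketch.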
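To make the pipeline outline above more concrete, the following is a minimal sketch (not the authors' implementation) of the symbol quantization step that a Kohonen network can provide ahead of HMM/DBN training: 3D hand positions from the stereo tracker are mapped onto a small set of discrete symbols. All function names and parameter values below are illustrative assumptions.

# Minimal sketch: 1-D Kohonen map quantizing (N, 3) hand positions into symbols.
import numpy as np

def train_kohonen(samples, n_units=16, n_epochs=50, lr0=0.5, sigma0=4.0, seed=0):
    """Train a 1-D self-organizing map on (N, 3) hand-position samples."""
    rng = np.random.default_rng(seed)
    units = samples[rng.choice(len(samples), n_units, replace=False)].astype(float)
    idx = np.arange(n_units)
    for epoch in range(n_epochs):
        lr = lr0 * (1.0 - epoch / n_epochs)                   # decaying learning rate
        sigma = max(sigma0 * (1.0 - epoch / n_epochs), 0.5)   # shrinking neighborhood
        for x in rng.permutation(samples):
            winner = np.argmin(np.linalg.norm(units - x, axis=1))
            h = np.exp(-((idx - winner) ** 2) / (2.0 * sigma ** 2))
            units += lr * h[:, None] * (x - units)            # pull units toward the sample
    return units

def quantize(samples, units):
    """Map each observation to the index of its closest map unit (its symbol)."""
    d = np.linalg.norm(samples[:, None, :] - units[None, :, :], axis=2)
    return np.argmin(d, axis=1)

# Usage: symbols = quantize(trajectory, train_kohonen(training_positions))
# The resulting symbol sequences are what EM-trained gesture models (HMM or DBN)
# and a particle-filter-based recognizer would then consume.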

[1] Alexander H. Waibel, et al. Confidence based multimodal fusion for person identification, 2008, ACM Multimedia.

[2] Malik Ghallab, et al. Robot introspection through learned hidden Markov models, 2006, Artif. Intell.

[3] Yoshinori Kuno, et al. Mutual assistance between speech and vision for human-robot interface, 2002, IEEE/RSJ International Conference on Intelligent Robots and Systems.

[4] Malik Ghallab, et al. Learning Behaviors Models for Robot Execution Control, 2006, ICAPS.

[5] R. Dillmann, et al. Using gesture and speech control for commanding a robot assistant, 2002, Proceedings of the 11th IEEE International Workshop on Robot and Human Interactive Communication.

[6] Illah R. Nourbakhsh, et al. A survey of socially interactive robots, 2003, Robotics Auton. Syst.

[7] Alexander H. Waibel, et al. Natural human-robot interaction using speech, head pose and gestures, 2004, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

[8] Lawrence R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition, 1989, Proc. IEEE.

[9] Youtian Du, et al. Recognizing Interaction Activities using Dynamic Bayesian Network, 2006, 18th International Conference on Pattern Recognition (ICPR'06).

[10] Vladimir Pavlovic, et al. Visual Interpretation of Hand Gestures for Human-Computer Interaction: A Review, 1997, IEEE Trans. Pattern Anal. Mach. Intell.

[11] Frédéric Lerasle, et al. Mutual assistance between speech and vision for human-robot interaction, 2008, IEEE/RSJ International Conference on Intelligent Robots and Systems.