Space-time gestures

A method for learning, tracking, and recognizing human gestures using a view-based approach to model articulated objects is presented. Objects are represented using sets of view models rather than single templates. Stereotypical space-time patterns, i.e., gestures, are then matched to stored gesture patterns using dynamic time warping. Real-time performance is achieved by using special-purpose correlation hardware and view prediction to prune as much of the search space as possible. Both view models and view predictions are learned from examples. Results showing tracking and recognition of human hand gestures at over 10 Hz are presented.
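The abstract names dynamic time warping as the matching step but gives no detail. As a rough sketch of how an observed gesture trajectory could be aligned against stored gesture patterns with DTW, consider the following; the per-frame feature vectors, the Euclidean local cost, and all function names here are illustrative assumptions, not the paper's implementation (which relies on special-purpose correlation hardware and learned view prediction).

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two gesture trajectories,
    each an (n_frames, n_features) array of per-frame features
    (hypothetical representation; the paper's features differ)."""
    n, m = len(seq_a), len(seq_b)
    # cost[i, j] = minimum cumulative cost of aligning seq_a[:i] with seq_b[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            # Allow diagonal (match), vertical, and horizontal warping steps.
            cost[i, j] = d + min(cost[i - 1, j - 1],
                                 cost[i - 1, j],
                                 cost[i, j - 1])
    return cost[n, m]

def recognize(observed, stored_gestures):
    """Return the label of the stored pattern with the smallest
    DTW distance to the observed sequence."""
    return min(stored_gestures,
               key=lambda label: dtw_distance(observed, stored_gestures[label]))
```

Note that plain DTW costs O(nm) per stored pattern; the over-10 Hz rates reported in the paper come from pruning the search with predicted views and dedicated correlation hardware, not from exhaustive alignment like this sketch.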
