Vision Based Motion Tracking System for Interactive Entertainment Applications

Although vision-based motion recognition plays an important role in virtual reality and interactive entertainment, it has not yet found wide use. The first reason is the unavailability of a relatively low-cost, wireless tracking system that performs well enough for gesture input. The second is the lack of naturalness in gesture design: most gestures used so far are either static poses or so abstract that they lose their intended meaning and affordances, resulting in low usability and presence. In this paper, we propose an architecture for a low-cost, reconfigurable vision-based motion tracking system together with a motion recognition algorithm. The user wears one or more retro-reflective markers, which allow more accurate tracking, and performs predefined motion gestures. Through object segmentation, the 3D positions of the markers are computed, and the motion gesture corresponding to these 3D position trajectories is then found by a simple correlation-based matching algorithm. We also demonstrate the system by applying it to virtual environment navigation and interactive entertainment.
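The matching step described above amounts to comparing an observed 3D position trajectory against a set of stored gesture templates and picking the most strongly correlated one. The sketch below is a minimal illustration of that idea, not the authors' implementation: it assumes trajectories are NumPy arrays of 3D marker positions, and the names (match_gesture, templates, threshold) are illustrative placeholders.

```python
# Hypothetical sketch of correlation-based gesture matching over 3D marker
# trajectories. A trajectory is an (N, 3) array of positions; each predefined
# gesture is stored as a template trajectory and resampled to a common length.
import numpy as np

def normalize(traj):
    """Zero-mean, unit-variance normalization per axis, so matching is
    insensitive to where the gesture is performed and to its overall scale."""
    traj = np.asarray(traj, dtype=float)
    return (traj - traj.mean(axis=0)) / (traj.std(axis=0) + 1e-8)

def resample(traj, n_samples):
    """Linearly resample a trajectory to a fixed number of samples, so gestures
    performed at different speeds can be compared point-by-point."""
    traj = np.asarray(traj, dtype=float)
    old_t = np.linspace(0.0, 1.0, len(traj))
    new_t = np.linspace(0.0, 1.0, n_samples)
    return np.column_stack([np.interp(new_t, old_t, traj[:, k]) for k in range(3)])

def correlation_score(query, template):
    """Mean per-axis correlation between two equally long 3D trajectories."""
    q, t = normalize(query), normalize(template)
    return float(np.mean(np.sum(q * t, axis=0) / len(q)))

def match_gesture(query, templates, n_samples=64, threshold=0.7):
    """Return the name of the best-matching gesture template, or None if no
    template correlates strongly enough with the observed trajectory."""
    q = resample(query, n_samples)
    scores = {name: correlation_score(q, resample(t, n_samples))
              for name, t in templates.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None
```

A rejection threshold of this kind would let the system ignore incidental marker motion that does not resemble any predefined gesture; the actual threshold and trajectory length used in the paper's system are not specified here.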
