A generic framework for the design of visual-based gesture control interface

Most visual-based gesture control systems are bound to specific applications: they rely on predefined postures that users must learn and memorize before they can issue commands. This makes it difficult to transfer a gesture control interface to a different application. This study presents a generic framework for the design of a human-machine interface based on visual-based hand gesture recognition techniques. Experimental results show that the proposed approach is feasible for practical applications.
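The abstract does not reproduce implementation details, but the core idea of a generic framework can be illustrated with a minimal sketch: the posture recognizer is decoupled from the application-specific command bindings, so the same interface can be reused across applications by swapping the binding table. All class and function names below (GestureEvent, CommandMap, and so on) are illustrative assumptions, not the paper's actual API.

```python
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class GestureEvent:
    """Output of a vision-based posture recognizer (hypothetical interface)."""
    label: str         # e.g. "open_palm", "fist"
    confidence: float  # recognizer confidence in [0, 1]


class CommandMap:
    """Application-specific binding of recognized postures to actions.

    The recognizer stays the same across applications; only the bindings
    installed here change, which is the 'generic framework' idea.
    """

    def __init__(self) -> None:
        self._bindings: Dict[str, Callable[[], None]] = {}

    def bind(self, label: str, action: Callable[[], None]) -> None:
        self._bindings[label] = action

    def dispatch(self, event: GestureEvent, threshold: float = 0.8) -> None:
        # Ignore low-confidence detections and unbound postures.
        action = self._bindings.get(event.label)
        if action is not None and event.confidence >= threshold:
            action()


if __name__ == "__main__":
    # The same recognizer could drive a slideshow in one application and a
    # media player in another simply by installing different bindings.
    commands = CommandMap()
    commands.bind("open_palm", lambda: print("play/pause"))
    commands.bind("fist", lambda: print("stop"))

    # In a real system these events would come from the vision pipeline.
    commands.dispatch(GestureEvent("open_palm", 0.93))
```

The dispatch threshold and posture labels are placeholders; in practice they would be supplied by whatever recognition technique (e.g. a trained hand-posture detector) the framework is configured with.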
