Multi-modal gesture identification for HCI using surface EMG

Gesture and speech are among the most important modalities of human interaction, and there has been considerable research into incorporating them for natural HCI. This involves challenges ranging from low-level signal processing of multi-modal input to high-level interpretation of natural speech and gesture. This paper proposes novel methods to recognize hand gestures and unvoiced utterances using surface electromyogram (sEMG) signals originating from different muscles. The focus of this work is to establish a simple yet robust system that can be integrated to identify subtle, complex hand gestures and unvoiced speech commands for the control of prostheses and other computer-assisted devices. The proposed multi-modal system identifies hand gestures and silent utterances using Independent Component Analysis (ICA) and the Integral RMS (IRMS) of sEMG, respectively. The sEMG features were used to train a purpose-designed artificial neural network (ANN) architecture, and the system achieved an overall recognition accuracy of 90.33%.
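As a concrete illustration of the two feature-extraction steps named above, the sketch below computes an IRMS feature vector from a single-channel sEMG window and applies ICA to a multi-channel sEMG recording using scikit-learn's FastICA. This is a minimal sketch, not the authors' implementation: the segment length, channel counts, and synthetic signals are assumptions made purely for illustration, since the abstract does not specify these details.

import numpy as np
from sklearn.decomposition import FastICA

def integral_rms(emg_window, segment_len=50):
    """Integral RMS (IRMS): cumulative sum of RMS values computed over
    consecutive segments of a single-channel sEMG window (segment length
    is an assumed parameter, not taken from the paper)."""
    n_segments = len(emg_window) // segment_len
    rms = [
        np.sqrt(np.mean(emg_window[i * segment_len:(i + 1) * segment_len] ** 2))
        for i in range(n_segments)
    ]
    return np.cumsum(rms)  # integrated RMS profile used as the feature vector

# --- Example with synthetic data standing in for real recordings ---
rng = np.random.default_rng(0)

# Multi-channel forearm sEMG (4 electrodes x 1000 samples) for a hand gesture.
forearm_emg = rng.standard_normal((1000, 4))

# ICA unmixes the electrode signals into estimated independent source
# activities; the source estimates can then serve as gesture features.
ica = FastICA(n_components=4, random_state=0)
sources = ica.fit_transform(forearm_emg)  # shape: (1000, 4)

# Single-channel facial sEMG window for an unvoiced utterance.
facial_emg = rng.standard_normal(1000)
irms_features = integral_rms(facial_emg)

print(sources.shape, irms_features.shape)

Following the abstract's description, feature vectors of this kind would then be used to train the ANN classifier; the network architecture itself is not detailed in the abstract.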
