Controlling a prosthetic arm with a throat microphone

The aim of the present paper is to illustrate the design and application phases of an innovative input source for an EMG upper limb prosthesis: a laryngophone, also known as a throat microphone (t-mic). In recent years, several alternative input sources have been explored, from implantable myoelectric sensors to mechanomyographic sensors. The idea of controlling a prosthesis with vocal commands is relatively recent, but it appears promising for helping users control their devices more effectively and thereby improving their quality of life.
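As a purely illustrative sketch (an assumption on our part, not the implementation described in the paper), the control flow can be pictured as a small vocabulary of spoken commands recognized from the throat-microphone signal and mapped to actuator command codes. The vocabulary, the recognize() stub and the command codes below are hypothetical placeholders.

# Illustrative sketch (hypothetical, not the authors' implementation):
# mapping vocal commands recognized from a throat-microphone signal to
# prosthetic-hand actions.

from typing import Iterable, Optional

# Hypothetical one-word vocabulary for a single degree-of-freedom hand.
COMMAND_CODES = {
    "open": 0x01,   # open the hand
    "close": 0x02,  # close the hand
    "stop": 0x00,   # halt the current movement
}

def recognize(utterance_audio: bytes) -> Optional[str]:
    """Stand-in for an ASR engine adapted to throat-microphone speech.

    A throat microphone picks up laryngeal vibration directly, so the signal
    is band-limited but largely immune to ambient acoustic noise; a real
    recognizer would be trained or adapted on such data.
    """
    # Demo only: pretend the audio frame already carries its transcription.
    word = utterance_audio.decode("ascii", errors="ignore").strip().lower()
    return word if word in COMMAND_CODES else None

def drive_prosthesis(frames: Iterable[bytes]) -> list[int]:
    """Translate each recognized command word into an actuator command code."""
    issued = []
    for frame in frames:
        word = recognize(frame)
        if word is not None:
            issued.append(COMMAND_CODES[word])  # would be sent to the controller
    return issued

if __name__ == "__main__":
    # Simulated stream of utterances captured by the throat microphone.
    print(drive_prosthesis([b"open", b"noise", b"close", b"stop"]))

In a real system, recognize() would be replaced by a speech recognizer robust to the band-limited throat-microphone signal, and the command codes would be transmitted to the prosthesis controller rather than collected in a list.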
