An Auditory Output Brain–Computer Interface for Speech Communication

Understanding the neural mechanisms underlying speech production can aid the design and implementation of brain–computer interfaces (BCIs) for speech communication. Speech production is unequivocally a motor behavior: speech arises from the precise activation of the muscles of the respiratory and vocal mechanisms. Speech also relies primarily on auditory output to convey information between conversation partners, and self-perception of one's own speech is important for maintaining error-free speech and proper production of intended utterances. This chapter discusses our efforts to use motor cortical output during attempted speech production to control a communication BCI by an individual with locked-in syndrome, while taking advantage of the neural circuits used for learning and maintaining speech. The result is a BCI capable of producing instantaneously vocalized output within a motor-based brain–computer interfacing framework that provides appropriate auditory feedback to the user.
