EEG Representations of Spatial and Temporal Features in Imagined Speech and Overt Speech

Imagined speech is an emerging paradigm for intuitive control of brain-computer interface (BCI)-based communication systems. Although the decoding performance of imagined speech is improving as new architectures are actively proposed, the fundamental question of which components these models actually decode remains open. Since imagined speech refers to the internal mechanism of producing speech, it may naturally resemble the distinctive features of overt speech. In this paper, we investigate the close relationship between the spatial and temporal features of imagined speech and overt speech using electroencephalography (EEG) signals. Using common spatial pattern (CSP) features, we obtained average thirteen-class classification accuracies of 16.2% for imagined speech and 59.9% for overt speech (chance rate = 7.7%). Although overt speech showed significantly higher classification performance than imagined speech, we found potentially similar common spatial patterns for identical classes of imagined and overt speech. Furthermore, for the temporal features, we observed analogous grand-averaged potentials for the most distinguishable classes in the two speech paradigms. Specifically, the amplitude correlation between imagined speech and overt speech was 0.71 for the class with the highest true positive rate. The similar spatial and temporal features of the two paradigms may provide a key to bottom-up decoding of imagined speech, implying the possibility of robust multiclass classification of imagined speech. This could be a milestone toward comprehensive decoding of speech-related paradigms based on their underlying patterns.
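
The spatial-feature analysis described above relies on CSP features followed by a multiclass classifier. Below is a minimal, hypothetical sketch of such a pipeline using MNE-Python and scikit-learn; the data shapes, channel count, and the choice of LDA as the classifier are illustrative assumptions, not the authors' exact implementation.

```python
# Hypothetical sketch of CSP-based multiclass classification of speech EEG.
# All data here are random placeholders; shapes and parameters are assumptions.
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Placeholder EEG epochs: (n_epochs, n_channels, n_samples), 13 word/phrase classes.
n_epochs, n_channels, n_samples, n_classes = 260, 64, 500, 13
X = rng.standard_normal((n_epochs, n_channels, n_samples))
y = rng.integers(0, n_classes, size=n_epochs)

# CSP spatial filters (log-variance features) followed by an LDA classifier.
# MNE's CSP accepts multiclass labels; a one-versus-rest CSP extension is a
# common alternative for the multiclass case.
clf = make_pipeline(CSP(n_components=4, log=True), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean accuracy: {scores.mean():.3f} (chance ~ {1 / n_classes:.3f})")
```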
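
For the temporal-feature comparison, the sketch below correlates the grand-averaged potentials of one class across the two paradigms. The arrays are random placeholders; in the actual study the averages would come from recorded imagined-speech and overt-speech epochs of the same class.

```python
# Minimal sketch: Pearson correlation of grand-averaged potentials between
# imagined and overt speech for a single class. Data are placeholders.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)

# Placeholder single-channel epochs for one class: (n_trials, n_samples).
imagined_epochs = rng.standard_normal((50, 500))
overt_epochs = rng.standard_normal((50, 500))

# Grand-averaged potentials (averaged over trials; in practice also over subjects).
imagined_erp = imagined_epochs.mean(axis=0)
overt_erp = overt_epochs.mean(axis=0)

# Amplitude correlation of the two time courses (the abstract reports r = 0.71
# for the class with the highest true positive rate).
r, p = pearsonr(imagined_erp, overt_erp)
print(f"Amplitude correlation: r = {r:.2f}, p = {p:.3g}")
```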
