Classification of Imagined Speech Using Siamese Neural Network

Imagined speech has attracted attention as an emerging brain-machine interface paradigm because it can serve as an intuitive communication tool. However, previous studies have reported low classification performance, which makes real-world use infeasible, and no suitable analysis method has been established. Deep learning algorithms have recently been applied to this paradigm, but the small amount of available data limits the gains in classification performance. To tackle these issues, we propose an end-to-end framework with a Siamese neural network encoder that learns discriminative features by considering the distance between classes. Six imagined words (arriba (up), abajo (down), derecha (right), izquierda (left), adelante (forward), and atrás (backward)) were classified from raw electroencephalography (EEG) signals. We obtained a 6-class classification accuracy of 31.40 ± 2.73% for imagined speech, significantly outperforming competing methods. This improvement is possible because the Siamese neural network increases the distance between dissimilar samples while decreasing the distance between similar samples, allowing discriminative features to be learned from a small dataset. The proposed framework could improve the classification of imagined speech from limited data and help realize an intuitive communication system.
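The core idea described above can be illustrated with a minimal sketch of a Siamese setup: a shared encoder maps pairs of raw EEG trials to embeddings, and a contrastive loss pulls same-class pairs together while pushing different-class pairs apart. This is only an illustrative assumption of such a pipeline in PyTorch; the layer sizes, channel count, embedding dimension, and margin are placeholders, not the authors' exact architecture.

```python
# Minimal sketch (assumed PyTorch implementation) of a Siamese EEG encoder
# trained with a contrastive loss. Shapes and hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EEGEncoder(nn.Module):
    """Shared encoder mapping raw EEG (channels x time samples) to an embedding."""
    def __init__(self, n_channels=64, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=25, stride=2),
            nn.BatchNorm1d(32), nn.ELU(),
            nn.Conv1d(32, 64, kernel_size=11, stride=2),
            nn.BatchNorm1d(64), nn.ELU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, emb_dim),
        )

    def forward(self, x):
        # Unit-length embeddings make the pairwise distances comparable.
        return F.normalize(self.net(x), dim=1)

def contrastive_loss(z1, z2, same_class, margin=1.0):
    """Decrease the distance for same-class pairs and increase it (up to a
    margin) for different-class pairs, as described in the abstract."""
    d = F.pairwise_distance(z1, z2)
    loss_same = same_class * d.pow(2)
    loss_diff = (1 - same_class) * F.relu(margin - d).pow(2)
    return (loss_same + loss_diff).mean()

# Toy usage with random "EEG" pairs: x1, x2 are (batch, channels, samples);
# `same` is 1 when the two trials share the imagined-word label, else 0.
encoder = EEGEncoder()
x1, x2 = torch.randn(8, 64, 500), torch.randn(8, 64, 500)
same = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(encoder(x1), encoder(x2), same)
loss.backward()
```

At test time, one plausible use of such an encoder is nearest-neighbor or prototype-based classification in the learned embedding space, which is why metric-learning approaches of this kind can cope with small datasets.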
