Inner Speech Classification using EEG Signals: A Deep Learning Approach

Brain-computer interfaces (BCIs) provide a direct communication pathway between the human brain and external devices. Three BCI paradigms are commonly employed: motor imagery (MI), event-related potential (ERP), and steady-state visually evoked potential (SSVEP). In this study, we sought to expand beyond these paradigms by focusing on the "Inner Speech" paradigm using EEG signals. Inner speech refers to the internalized process of imagining one's own "voice." Using a 2D convolutional neural network (CNN) based on the EEGNet architecture, we classified EEG signals recorded from eight subjects as they internally spoke four different words. Our results showed an average word-recognition accuracy of 29.7%, slightly above the 25% chance level for a four-class task. We discuss the limitations of our approach and offer suggestions for future research.
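As a rough illustration of the model family described above, the sketch below implements an EEGNet-style 2D CNN in PyTorch. All hyperparameters (electrode count, window length, filter counts F1/D/F2, kernel sizes, dropout) are illustrative assumptions, not the values used in the study, and the framework choice is ours; the original EEGNet was published as a Keras model.

```python
import torch
import torch.nn as nn

class EEGNetLike(nn.Module):
    """EEGNet-style CNN for 4-class inner-speech classification.

    Assumed input shape: (batch, 1, n_channels, n_samples).
    All hyperparameter defaults are illustrative, not from the study.
    """
    def __init__(self, n_channels=128, n_samples=512, n_classes=4,
                 F1=8, D=2, F2=16, dropout=0.5):
        super().__init__()
        self.block1 = nn.Sequential(
            # temporal convolution across time samples
            nn.Conv2d(1, F1, (1, 64), padding=(0, 32), bias=False),
            nn.BatchNorm2d(F1),
            # depthwise spatial convolution across EEG electrodes
            nn.Conv2d(F1, F1 * D, (n_channels, 1), groups=F1, bias=False),
            nn.BatchNorm2d(F1 * D),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
            nn.Dropout(dropout),
        )
        self.block2 = nn.Sequential(
            # separable convolution: depthwise followed by pointwise
            nn.Conv2d(F1 * D, F1 * D, (1, 16), padding=(0, 8),
                      groups=F1 * D, bias=False),
            nn.Conv2d(F1 * D, F2, (1, 1), bias=False),
            nn.BatchNorm2d(F2),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
            nn.Dropout(dropout),
        )
        # LazyLinear infers the flattened feature size on first call
        self.classify = nn.LazyLinear(n_classes)

    def forward(self, x):
        x = self.block2(self.block1(x))
        return self.classify(x.flatten(1))

if __name__ == "__main__":
    model = EEGNetLike()
    # two fake EEG epochs: 128 electrodes x 512 time samples
    logits = model(torch.randn(2, 1, 128, 512))
    print(logits.shape)  # one logit per word class
```

In this architecture, the first block learns temporal filters and then per-filter spatial weightings over electrodes, which is why the spatial kernel spans the full channel dimension; the separable second block keeps the parameter count small, a common concern given the limited trial counts typical of EEG datasets.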