Neural Decoding of Imagined Speech and Visual Imagery as Intuitive Paradigms for BCI Communication

Brain-computer interface (BCI) research is oriented toward intuitive systems that users can operate with ease. Imagined speech and visual imagery are emerging paradigms that can directly convey a user's intention. We investigated the underlying characteristics that affect the decoding performance of these two paradigms. Twenty-two subjects performed imagined speech and visual imagery of twelve words/phrases frequently used in patient communication. Spectral features were analyzed with thirteen-class classification (including a rest class) using EEG filtered into six frequency ranges. In addition, the cortical regions relevant to the two paradigms were analyzed through classification using single channels and pre-defined cortical groups. Furthermore, we analyzed the word properties that affect decoding performance, based on the number of syllables and on concrete versus abstract concepts, as well as the correlation between the two paradigms. Finally, we investigated multiclass scalability in both paradigms. The high-frequency band yielded significantly higher thirteen-class classification performance than any other spectral feature (imagined speech: 39.73 ± 5.64%; visual imagery: 40.14 ± 4.17%). Furthermore, among the cortical regions, Broca's and Wernicke's areas and the auditory cortex showed superior performance in both paradigms. As the number of classes increased, the decoding performance decreased only moderately. Moreover, every subject exceeded the confidence-level performance, suggesting that the two paradigms are robust to BCI inefficiency. These two intuitive paradigms were found to be highly effective for multiclass communication systems and showed considerable similarity to each other. The results could provide crucial information for improving decoding performance in practical BCI applications.
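
As a rough illustration of the kind of analysis described above, the sketch below band-pass filters EEG epochs into several frequency ranges, extracts a simple spectral feature, and evaluates multiclass decoding accuracy per band. The band edges, the log-variance feature, and the LDA classifier are illustrative assumptions and are not taken from the paper, which does not specify its pipeline in the abstract.

```python
# Illustrative sketch (not the authors' exact pipeline): filter EEG epochs into
# assumed frequency bands, compute a simple spectral-power proxy per channel,
# and report cross-validated thirteen-class decoding accuracy for each band.
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

FS = 250  # assumed sampling rate (Hz)
BANDS = {  # assumed edges for six frequency ranges (Hz)
    "delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
    "beta": (13, 30), "low_gamma": (30, 70), "high_gamma": (70, 120),
}

def bandpass(epochs, low, high, fs=FS, order=4):
    """Zero-phase band-pass filter; epochs shape = (n_trials, n_channels, n_samples)."""
    sos = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band", output="sos")
    return sosfiltfilt(sos, epochs, axis=-1)

def log_variance(epochs):
    """Simple spectral feature: log of per-channel signal variance."""
    return np.log(np.var(epochs, axis=-1) + 1e-12)

def band_accuracy(epochs, labels, band):
    """Cross-validated multiclass accuracy for one frequency band."""
    features = log_variance(bandpass(epochs, *BANDS[band]))
    return cross_val_score(LinearDiscriminantAnalysis(), features, labels, cv=5).mean()

if __name__ == "__main__":
    # Synthetic stand-in data: 13 classes (12 words/phrases + rest), 64 channels, 2 s epochs.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((130, 64, 2 * FS))
    y = np.repeat(np.arange(13), 10)
    for name in BANDS:
        print(f"{name}: {band_accuracy(X, y, name):.3f}")
```

On real data, comparing the per-band accuracies in this manner is one way to probe whether high-frequency components carry more class-discriminative information than lower bands, as the abstract reports.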