EEG-based classification of imagined vowel sounds

Research indicates that electroencephalography (EEG) can be used to classify imagined speech, a capability that can be further utilized to develop speech prostheses and synthetic telepathy systems. The objective of this paper is to improve classification performance for imagined speech by selecting features that extract maximum discriminatory information from the data. The features extracted are variance, entropy, and signal energy in the normalized frequency range of 0.5-0.9. EEG data from three healthy subjects, who imagined speaking the vowel sounds /a/ and /u/ or remained in a no-action control state over 50 trials each, were processed to extract these features. Classification was performed using linear and quadratic classifiers and a nonlinear support vector machine. The classification accuracies obtained in this work range from 77.5% to 100%, a considerable improvement over the previous accuracies of 56-82% reported by DaSalla [1]. These results can be used to develop better speech prostheses or telepathy systems in which most of the information in imagined speech can be extracted.
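To make the feature-extraction and classification pipeline concrete, the sketch below shows one plausible implementation in Python. The abstract does not specify the entropy estimator, sampling rate, channel count, or SVM kernel, so those are assumptions here: a histogram-based Shannon entropy, a 256 Hz sampling rate with six channels of synthetic data standing in for the real recordings, the 0.5-0.9 band taken as fractions of the Nyquist frequency, and an RBF kernel for the nonlinear SVM. It is a minimal illustration, not a reconstruction of the authors' code.

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis,
    QuadraticDiscriminantAnalysis,
)
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def extract_features(trial, fs=256, n_bins=32, band=(0.5, 0.9)):
    """Per-channel variance, Shannon entropy, and band energy for one trial.

    trial: array of shape (n_channels, n_samples).
    band:  normalized frequency range, as fractions of the Nyquist frequency
           (an assumption; the paper does not define its normalization).
    """
    feats = []
    for ch in trial:
        # Variance of the time-domain signal.
        variance = np.var(ch)

        # Histogram-based Shannon entropy (one plausible estimator;
        # the paper does not specify which entropy measure was used).
        hist, _ = np.histogram(ch, bins=n_bins, density=True)
        p = hist[hist > 0]
        p = p / p.sum()
        entropy = -np.sum(p * np.log2(p))

        # Signal energy in the normalized frequency band 0.5-0.9,
        # integrated from the Welch power spectral density estimate.
        f, psd = welch(ch, fs=fs, nperseg=min(256, ch.size))
        f_norm = f / (fs / 2)
        mask = (f_norm >= band[0]) & (f_norm <= band[1])
        band_energy = np.trapz(psd[mask], f[mask])

        feats.extend([variance, entropy, band_energy])
    return np.asarray(feats)


# Hypothetical data: 150 trials (3 classes x 50), 6 channels, 2 s at 256 Hz.
rng = np.random.default_rng(0)
X = np.stack([extract_features(t) for t in rng.standard_normal((150, 6, 512))])
y = np.repeat([0, 1, 2], 50)  # /a/, /u/, no-action control

# Compare the three classifiers named in the abstract via 5-fold CV.
for name, clf in [
    ("LDA", LinearDiscriminantAnalysis()),
    ("QDA", QuadraticDiscriminantAnalysis()),
    ("SVM (RBF)", SVC(kernel="rbf", C=1.0, gamma="scale")),
]:
    scores = cross_val_score(make_pipeline(StandardScaler(), clf), X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```

On real recordings, the synthetic `rng.standard_normal` trials would be replaced by preprocessed epochs of shape (channels, samples), and accuracies in the 77.5-100% range reported above would be expected only for discriminative data, not for the random noise used here.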