EEG-based vowel classification during speech imagery

Electroencephalography (EEG) has long been used for brain-computer interfaces (BCIs). Recent research has shown that EEG can also be used to classify data generated during speech imagery. Such classification can, in turn, be applied to the development of speech prostheses and synthetic telepathy systems. In this paper, a new algorithm is applied to classify imagined English vowel sounds. The algorithm distinguishes among three classes, the English vowel sounds /a/, /u/ and `rest', both in a pair-wise manner and as a `combination of two sounds (tasks)'. Simple time-domain features, namely standard deviation and waveform length, are used for classification. The proposed algorithm was tested on 3 subjects and significant classification accuracies were obtained. The pair-wise classification accuracy was found to be 70-82.5%, an improvement over the previous accuracy of 56-82% reported by DaSalla [4] on his own database. The `combination of tasks' classification accuracy was found to be 85-100%.
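For illustration only, the sketch below shows one way the two time-domain features named above (standard deviation and waveform length) could be computed per channel for a single imagery epoch; the channel count, epoch length, and function name are assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the two time-domain features
# cited in the abstract, computed per EEG channel for one epoch.
import numpy as np

def extract_features(epoch: np.ndarray) -> np.ndarray:
    """epoch: array of shape (n_channels, n_samples) for one imagery trial."""
    std = epoch.std(axis=1)                          # standard deviation per channel
    wl = np.abs(np.diff(epoch, axis=1)).sum(axis=1)  # waveform length: sum of |x[t+1] - x[t]|
    return np.concatenate([std, wl])                 # feature vector fed to the classifier

# Example with a simulated 64-channel epoch of 512 samples (assumed dimensions)
rng = np.random.default_rng(0)
features = extract_features(rng.standard_normal((64, 512)))
print(features.shape)  # (128,) -> 2 features per channel
```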