EEG based classification of imagined vowel sounds
Research indicates that electroencephalography (EEG) can be used to classify imagined-speech data, which could in turn support speech prostheses and synthetic telepathy systems. The objective of this paper is to improve classification performance for imagined speech by selecting features that extract maximum discriminatory information from the data. The extracted features are variance, entropy, and signal energy in the normalized frequency range 0.5-0.9. EEG data from three healthy subjects, each performing 50 trials of imagining the vowel sounds /a/ and /u/ plus a no-action control state, were processed to extract these features. Classification was performed with linear and quadratic classifiers and a nonlinear support vector machine. The classification accuracies obtained in this work range from 77.5% to 100%, a considerable improvement over the accuracies of 56-82% previously reported by DaSalla et al. [1]. The results can be used to develop better speech prostheses or telepathy systems in which most of the information in imagined speech can be extracted.
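As a rough illustration of the pipeline the abstract describes, the sketch below computes per-channel variance, spectral entropy, and band energy in the normalized 0.5-0.9 frequency range, then compares linear, quadratic, and RBF-SVM classifiers. The sampling rate, the spectral-entropy formulation, the Welch PSD estimator, and the per-channel feature layout are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the described pipeline, not the paper's code.
# Assumed: fs=256 Hz sampling, Welch PSD, spectral entropy as the
# entropy measure, "normalized frequency" meaning fraction of Nyquist.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis)
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def extract_features(trial, fs=256):
    """trial: (n_channels, n_samples) EEG for one imagined-speech trial."""
    feats = []
    for ch in trial:
        # Welch PSD; frequencies normalized to [0, 1] of Nyquist.
        freqs, psd = welch(ch, fs=fs, nperseg=min(256, ch.size))
        norm = freqs / (fs / 2.0)
        band = (norm >= 0.5) & (norm <= 0.9)       # normalized 0.5-0.9 band
        p = psd / psd.sum()                        # PSD as a distribution
        entropy = -np.sum(p * np.log2(p + 1e-12))  # spectral entropy (assumed)
        feats += [np.var(ch), entropy, psd[band].sum()]
    return np.asarray(feats)

def classify(X, y, fs=256):
    """X: (n_trials, n_channels, n_samples); y: labels for /a/, /u/, rest."""
    F = np.array([extract_features(t, fs) for t in X])
    for name, clf in [("linear (LDA)", LinearDiscriminantAnalysis()),
                      ("quadratic (QDA)", QuadraticDiscriminantAnalysis()),
                      ("nonlinear SVM", SVC(kernel="rbf"))]:
        acc = cross_val_score(clf, F, y, cv=5).mean()
        print(f"{name}: {acc:.1%}")
```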
[1] C. S. DaSalla et al. Single-trial classification of vowel speech imagery using common spatial patterns. Neural Networks, 2009.
[2] D. E. Callan et al. Single-sweep EEG analysis of neural processes underlying perception and production of vowels. Cognitive Brain Research, 2000.
[3] G. Pfurtscheller et al. Brain-computer interfaces for communication and control. Communications of the ACM, 2011.