[An improved electroencephalogram feature extraction algorithm and its application in emotion recognition].

Recognizing the emotional state induced by music may provide theoretical support for music-assisted therapy. The key to assessing the emotional state is feature extraction from the emotional electroencephalogram (EEG). In this paper, we study performance optimization of the feature extraction algorithm. A public multimodal database for emotion analysis using physiological signals (DEAP), proposed by Koelstra et al., was used. Data for eight kinds of positive and negative emotions were extracted from the dataset, covering fourteen channels from different regions of the brain. Based on the wavelet transform, the δ, θ, α and β rhythms were extracted. This paper analyzed and compared the performance of three kinds of EEG features for emotion classification, namely wavelet features (wavelet coefficient energy and wavelet entropy), approximate entropy and the Hurst exponent. On this basis, an EEG feature fusion algorithm based on principal component analysis (PCA) was proposed: the principal components with a cumulative contribution rate of more than 85% were retained, and the parameters that varied greatly in the characteristic roots were selected. A support vector machine was used to assess the emotional state. The results showed that the average classification accuracies with wavelet features, approximate entropy and the Hurst exponent were 73.15%, 50.00% and 45.54%, respectively. When the three kinds of features were fused with PCA, the accuracy reached about 85%, at least 12% higher than that of any single feature, providing assistance for emotional EEG feature extraction and music therapy.
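The pipeline described above (wavelet-band features followed by PCA fusion) can be sketched as follows. This is an illustrative sketch, not the paper's exact implementation: it uses a Haar wavelet for simplicity, the DEAP sampling rate of 128 Hz, surrogate random data in place of real EEG epochs, and hypothetical parameter choices (trial count, epoch length); the paper's wavelet family, channel selection and SVM stage are not reproduced here.

```python
# Sketch: Haar-wavelet band energies, wavelet (Shannon) entropy, and PCA
# fusion retaining components up to an 85% cumulative contribution rate.
# All concrete numbers below (20 trials, 2 s epochs) are assumptions.
import numpy as np

def haar_dwt_bands(x, levels=4):
    """Return detail coefficients D1..D4 and the final approximation A4.

    With fs = 128 Hz: D1 ~ 32-64 Hz (gamma range, unused in the paper),
    D2 ~ beta (16-32), D3 ~ alpha (8-16), D4 ~ theta (4-8), A4 ~ delta (0-4)."""
    bands = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        if len(a) % 2:                            # pad to even length
            a = np.append(a, a[-1])
        bands.append((a[0::2] - a[1::2]) / np.sqrt(2.0))  # detail coeffs
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)            # approximation
    bands.append(a)                               # final approximation (delta)
    return bands

def wavelet_features(x):
    """Per-band wavelet coefficient energies plus wavelet entropy."""
    energies = np.array([np.sum(b ** 2) for b in haar_dwt_bands(x)])
    p = energies / energies.sum()                 # relative band energies
    entropy = -np.sum(p * np.log(p + 1e-12))      # Shannon wavelet entropy
    return np.append(energies, entropy)

def pca_fuse(F, contribution=0.85):
    """Project feature matrix F (samples x features) onto the leading
    principal components whose cumulative contribution rate reaches 85%."""
    Fc = F - F.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(Fc, rowvar=False))
    order = np.argsort(vals)[::-1]                # eigenvalues, descending
    vals, vecs = vals[order], vecs[:, order]
    cum = np.cumsum(vals) / vals.sum()
    k = int(np.searchsorted(cum, contribution)) + 1
    return Fc @ vecs[:, :k], k

# Toy usage: 20 surrogate trials of 14-channel, 2 s EEG at 128 Hz.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((20, 14, 256))
F = np.array([np.concatenate([wavelet_features(ch) for ch in trial])
              for trial in epochs])              # 14 channels x 6 features
fused, k = pca_fuse(F)
print(F.shape, fused.shape, k)
```

The fused matrix (one row per trial) would then be fed to an SVM classifier, as in the paper; approximate entropy and the Hurst exponent would be appended to each trial's feature vector before the PCA step.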