Emotion Recognition Method Based on Multimodal Sensor Fusion Algorithm
Humans recognize emotion by fusing information from speech, facial expression, gesture, and bio-signals, and computers need comparable technologies that combine multiple sources of information. In this paper, we recognize five emotions (normal, happiness, anger, surprise, sadness) from speech signals and facial images, and we propose a multimodal method that fuses the individual results into a final emotion decision. Emotion recognition from the speech signal and from the facial image is performed with Principal Component Analysis (PCA), and the multimodal stage fuses the two recognition results using a fuzzy membership function. In our experiments, the average emotion recognition rate was 63% using speech signals and 53.4% using facial images; that is, the speech signal yields a higher recognition rate than the facial image. To raise the recognition rate further, we propose a decision fusion method based on an S-type membership function. With the proposed method, the average recognition rate is 70.4%, showing that decision fusion outperforms either the speech signal or the facial image alone.
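The abstract does not spell out the fusion rule itself, so the following is a minimal sketch of how decision-level fusion with an S-type membership function might look. The score normalization range, the max-combination rule, and the parameters a and b are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

EMOTIONS = ["normal", "happiness", "anger", "surprise", "sadness"]

def s_membership(x, a, b):
    """Standard S-type (S-shaped) membership function on [a, b]."""
    x = np.asarray(x, dtype=float)
    mid = (a + b) / 2.0
    y = np.empty_like(x)
    y[x <= a] = 0.0
    left = (x > a) & (x <= mid)
    y[left] = 2.0 * ((x[left] - a) / (b - a)) ** 2
    right = (x > mid) & (x < b)
    y[right] = 1.0 - 2.0 * ((x[right] - b) / (b - a)) ** 2
    y[x >= b] = 1.0
    return y

def fuse_decisions(speech_scores, face_scores, a=0.0, b=1.0):
    """Map per-class scores from each modality through the S-type
    membership function and combine them; the fused emotion is the
    class with the largest combined membership."""
    mu_speech = s_membership(np.asarray(speech_scores, dtype=float), a, b)
    mu_face = s_membership(np.asarray(face_scores, dtype=float), a, b)
    fused = np.maximum(mu_speech, mu_face)  # max-combination is an assumed choice
    return EMOTIONS[int(np.argmax(fused))], fused

if __name__ == "__main__":
    # Hypothetical normalized per-class similarity scores (e.g. from PCA matching).
    speech = [0.20, 0.70, 0.10, 0.30, 0.15]
    face   = [0.25, 0.55, 0.20, 0.60, 0.10]
    label, fused = fuse_decisions(speech, face)
    print(label, np.round(fused, 3))
```

In this sketch each modality contributes a membership value per emotion, and the fusion simply keeps the stronger evidence per class; a weighted sum or product could be substituted without changing the overall decision-fusion structure described in the paper.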