Bimodal emotion recognition based on adaptive weights

Emotion recognition is one of the most important issues in human-computer interaction (HCI). This paper describes a bimodal emotion recognition approach that uses a boosting-based framework to automatically determine adaptive weights for audio and visual features. The system dynamically balances the importance of the audio and visual features at the feature level to obtain better recognition performance. Facial feature points are tracked with the traditional Kanade-Lucas-Tomasi (KLT) algorithm integrated with a point distribution model (PDM), which guides the analysis of facial feature deformation. Experiments demonstrate the validity and effectiveness of the method, with a recognition rate of over 84%.
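
To illustrate the idea of feature-level fusion with an adaptive audio/visual weight, the following is a minimal sketch. It does not reproduce the paper's boosting-based weight selection; instead it chooses the weight by validation accuracy of a simple nearest-centroid classifier, and all function names, parameters, and the synthetic data are illustrative assumptions.

```python
# Sketch of adaptive-weight feature-level fusion of audio and visual features.
# The weight selection here is a plain validation search, not the paper's
# boosting procedure; names and data are illustrative only.

import numpy as np


def fuse(audio, visual, w_audio):
    """Concatenate audio and visual features scaled by a shared weight."""
    return np.hstack([w_audio * audio, (1.0 - w_audio) * visual])


def nearest_centroid_accuracy(train_x, train_y, test_x, test_y):
    """Classify each test sample by the closest class centroid."""
    classes = np.unique(train_y)
    centroids = np.stack([train_x[train_y == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(test_x[:, None, :] - centroids[None, :, :], axis=2)
    preds = classes[np.argmin(dists, axis=1)]
    return float(np.mean(preds == test_y))


def select_weight(audio_tr, vis_tr, y_tr, audio_va, vis_va, y_va,
                  candidates=np.linspace(0.0, 1.0, 11)):
    """Pick the audio weight that maximizes validation accuracy."""
    best_w, best_acc = 0.5, -1.0
    for w in candidates:
        acc = nearest_centroid_accuracy(
            fuse(audio_tr, vis_tr, w), y_tr,
            fuse(audio_va, vis_va, w), y_va)
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic 3-class data in which audio features happen to be
    # more discriminative than visual ones.
    y = rng.integers(0, 3, size=300)
    audio = rng.normal(size=(300, 12)) + 2.0 * y[:, None]
    visual = rng.normal(size=(300, 20)) + 0.5 * y[:, None]
    tr, va = slice(0, 200), slice(200, 300)
    w, acc = select_weight(audio[tr], visual[tr], y[tr],
                           audio[va], visual[va], y[va])
    print(f"selected audio weight = {w:.1f}, validation accuracy = {acc:.2f}")
```

In this toy setup the search assigns a larger weight to the more discriminative modality, which is the behavior the adaptive fusion aims for; the actual system would derive the weights from the boosting framework rather than a grid search.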