Speech emotion recognition using the auditory cortex

The importance of recognizing emotions from human speech has grown with the increasing role of spoken language interfaces in human-computer interaction applications. Extracting emotional features from human speech and classifying different emotions calls for an architecture approaching the complexity of the human brain. This paper studies novel neuro-psychologically inspired computational intelligence techniques that mimic the learning process of the auditory cortex to infer the emotion exhibited by the speaker under observation. We first extract emotion features from the sampled speech signal using Mel-frequency cepstral coefficients (MFCCs) and then detect cross-cultural emotions from the speech with a high degree of accuracy. Experimental results show that the architecture can detect and distinguish the emotional state of happiness from anger using data obtained from a real-life, unobtrusive environment and an online call-centre archive.
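As a rough illustration of the MFCC feature-extraction step described above, the following Python sketch computes per-utterance MFCC statistics from a speech recording. It is a minimal sketch, not the paper's implementation: the use of the librosa library, the 16 kHz sample rate, the 13 coefficients, the mean/standard-deviation summarization, and the file name are all assumptions for illustration.

```python
# Minimal sketch of MFCC feature extraction from a sampled speech signal.
# Assumptions (not specified in the paper): librosa for audio I/O and
# feature computation, a 16 kHz sample rate, and 13 coefficients per frame.
import numpy as np
import librosa

def extract_mfcc_features(path, sr=16000, n_mfcc=13):
    """Load a speech recording and return a fixed-length MFCC feature vector."""
    y, sr = librosa.load(path, sr=sr)                        # resample to a fixed rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # shape: (n_mfcc, frames)
    # Summarize the frame-level coefficients into one vector per utterance,
    # a common input format for a downstream emotion classifier.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

features = extract_mfcc_features("utterance.wav")  # hypothetical file name
print(features.shape)  # (26,): 13 means followed by 13 standard deviations
```

The summary vector would then be fed to whatever classifier separates the emotion classes (here, happiness versus anger); the paper's neuro-psychologically inspired architecture plays that role.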