CHAD: A Chinese Affective Database

Affective databases play an important role in affective computing, which has become an attractive field of AI research. Based on an analysis of existing databases, a Chinese affective database (CHAD) is designed and established for seven emotion states: neutral, happiness, sadness, fear, anger, surprise, and disgust. Rather than relying on the personal-suggestion method alone, audiovisual materials are collected in four ways: three types of laboratory recording, plus movie clips. Broadcast programmes are also included as a source of the vocal corpus, giving five sources in total. Comparing the five sources yields two findings. First, although broadcast programmes achieve the best performance in the listening experiment, they still pose problems: copyright restrictions, the lack of visual information, and a failure to represent the characteristics of everyday speech. Second, laboratory recording of sentences with appropriate emotional content is an outstanding source of material, performing comparably to broadcasts.
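As a rough illustration of the labelling scheme the abstract describes, the sketch below models one corpus entry in Python. The names (`Emotion`, `Source`, `CorpusEntry`) and the file layout are hypothetical, not from the paper; the abstract names only one of the three laboratory-recording variants (sentences with emotional content), so the other two are left as placeholders.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Emotion(Enum):
    """The seven emotion states covered by CHAD."""
    NEUTRAL = "neutral"
    HAPPINESS = "happiness"
    SADNESS = "sadness"
    FEAR = "fear"
    ANGER = "anger"
    SURPRISE = "surprise"
    DISGUST = "disgust"

class Source(Enum):
    """The five material sources compared in the paper.

    Only the emotional-sentence variant of laboratory recording is
    named in the abstract; the other two lab variants are placeholders.
    """
    LAB_EMOTIONAL_SENTENCES = "lab recording: sentences with emotional content"
    LAB_VARIANT_B = "lab recording: second variant (unspecified)"
    LAB_VARIANT_C = "lab recording: third variant (unspecified)"
    MOVIE = "movie clip"
    BROADCAST = "broadcast programme"

@dataclass
class CorpusEntry:
    """One annotated utterance: media paths plus emotion and source labels."""
    audio_path: str
    video_path: Optional[str]  # None for broadcast material, which lacks visuals
    emotion: Emotion
    source: Source

# Hypothetical usage: an audio-only broadcast clip labelled as anger.
entry = CorpusEntry(
    audio_path="chad/broadcast/anger_0001.wav",
    video_path=None,
    emotion=Emotion.ANGER,
    source=Source.BROADCAST,
)
```

Keeping the source as an explicit label on each entry makes it straightforward to filter the corpus by origin, for instance when comparing listening-test performance across the five sources as the paper does.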
