Heterogeneous driver behavior state recognition using speech signal

Driver behavior is one of the major factors contributing to road accidents. Hence, if abnormal driver behavior can be detected, such tragedies may be prevented. Based on the hypotheses that 1) driver behavior is influenced by emotion and 2) emotion can be measured from speech, we propose an alternative way to recognize driver affective states. Emotion is assumed to be dynamic, changing gradually over time; this assumption is widely shared among psychologists, who represent emotion using the affective space model. In this paper, we derive the affective space model dynamically from emotional speech data drawn from three different cultural bases, namely American, European, and Asian, to show that the approach generalizes well and can be adapted across cultures. Experimental results show the potential of applying this approach to determine driver behavior states (DBS), namely sleepy, talking on a cell phone, laughing while driving, and normal driving.
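To make the speech-to-DBS pipeline concrete, the following is a minimal sketch, not the implementation described in this paper: it assumes generic utterance-level MFCC statistics (via librosa) and a standard SVM classifier (scikit-learn) over the four DBS classes; the feature set, classifier, and function names are illustrative assumptions only.

```python
# Hypothetical sketch of a speech-based DBS classifier.
# The paper does not specify these tools; librosa + scikit-learn are assumed here.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

DBS_CLASSES = ["sleepy", "cell_phone", "laughing", "normal"]

def utterance_features(wav_path, sr=16000):
    """Summarize one utterance as the mean and std of its MFCCs (a generic choice)."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_dbs_classifier(wav_paths, labels):
    """Fit a simple RBF-SVM over utterance-level features labeled with DBS classes."""
    X = np.vstack([utterance_features(p) for p in wav_paths])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, labels)
    return clf
```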