Driving Fatigue Detection Combining Face Features with Physiological Information

Fatigued driving is one of the main causes of severe traffic accidents. It is therefore necessary to detect the fatigue state and warn drivers in time to avoid life-threatening accidents. Many techniques have been proposed to detect fatigue, most of which are based on either physiological signals or facial features. However, physiological indicators are difficult to analyze in real time and their sensors are intrusive, while image-based approaches are relatively subjective. Hence, in this paper, a method combining physiological information with facial features is proposed. We use functional near-infrared spectroscopy (fNIRS) to represent the physiological state and the eye and mouth conditions to represent the facial state. First, a Multi-Task Convolutional Neural Network (MTCNN) is used to extract facial features, and a lightweight classifier is then designed to recognize the eye and mouth states. Finally, a Long Short-Term Memory (LSTM) model is used to fuse these features and predict fatigue. Experimental results show that the proposed method achieves a high accuracy of about 95.8% and a fast detection speed of about 6.12 ms.
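
To make the fusion stage concrete, the following is a minimal sketch (not the authors' released code) of how per-frame facial-state features and synchronized fNIRS features could be concatenated and fed to an LSTM that outputs a fatigue probability. All module names, feature dimensions, and window lengths below are illustrative assumptions.

```python
# Hypothetical sketch of the face/fNIRS fusion LSTM described in the abstract.
# Dimensions and names are assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn


class FatigueFusionLSTM(nn.Module):
    def __init__(self, face_dim=4, fnirs_dim=8, hidden_dim=64, num_layers=1):
        super().__init__()
        # The LSTM consumes the concatenated face-state and fNIRS feature
        # vector at each time step.
        self.lstm = nn.LSTM(
            input_size=face_dim + fnirs_dim,
            hidden_size=hidden_dim,
            num_layers=num_layers,
            batch_first=True,
        )
        # Binary output: fatigued vs. alert.
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, face_feats, fnirs_feats):
        # face_feats:  (batch, seq_len, face_dim), e.g. eye/mouth open-closed scores
        # fnirs_feats: (batch, seq_len, fnirs_dim), e.g. channel-wise fNIRS measures
        x = torch.cat([face_feats, fnirs_feats], dim=-1)
        out, _ = self.lstm(x)
        # Score the whole window from the last time step's hidden state.
        logits = self.head(out[:, -1, :])
        return torch.sigmoid(logits)


if __name__ == "__main__":
    model = FatigueFusionLSTM()
    face = torch.randn(2, 30, 4)     # 30-frame window of facial-state features
    fnirs = torch.randn(2, 30, 8)    # synchronized fNIRS features
    print(model(face, fnirs).shape)  # torch.Size([2, 1]) fatigue probabilities
```

In this sketch the facial-state features would come from the MTCNN-based eye/mouth classifier and the fNIRS features from the physiological channel; the LSTM simply models their joint temporal evolution over a fixed-length window.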