Hypovigilance analysis: open or closed eye or mouth? Blinking or yawning frequency?

This paper proposes a frequency-based method to estimate the open or closed state of the eyes and mouth and to detect the associated motion events, blinking and yawning. The context of this work is the detection of hypovigilance in a user such as a driver or a pilot. In A. Benoit and Caplier (2005) we proposed a method for motion detection and estimation based on the processing achieved by the human visual system. The motion analysis algorithm models the filtering step occurring at the retina level and the analysis performed at the visual cortex level. This method is used to estimate the motion of the eyes and mouth: blinking corresponds to fast vertical motion of the eyelid, and yawning to a large vertical opening of the mouth. The detection of the open or closed state of a feature is based on the total energy of the image at the output of the retina filter: this energy is higher for open features. Since the absolute energy level associated with a given state differs from one person to another and across illumination conditions, the energy level associated with each state (open or closed) is adaptive and is updated each time a motion event (blinking or yawning) is detected. No constraint on motion is required. The system works in real time and under all types of lighting conditions, since the retina filtering copes with illumination variations. This makes it possible to estimate blinking and yawning frequencies, which are cues of hypovigilance.
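
The state decision and its adaptive calibration can be illustrated compactly. The following Python sketch is not the authors' implementation: the retina filter is replaced by a simple Laplacian high-pass stand-in, and the class name, the midpoint threshold, and the adapt_rate parameter are assumptions introduced for illustration. It shows the core idea described above: compare the filtered-image energy against open/closed levels that are re-learned at every detected blink or yawn, and derive an event frequency from the event timestamps.

```python
import numpy as np

class FeatureStateMonitor:
    """Minimal sketch of energy-based open/closed detection with
    adaptive levels.  ASSUMPTION: the retina filter is approximated
    by a Laplacian high-pass; parameter values are illustrative,
    not the authors' implementation."""

    def __init__(self, adapt_rate=0.2):
        self.adapt_rate = adapt_rate  # weight of a new measurement when updating a level
        self.open_level = None        # running energy estimate for the open state
        self.closed_level = None      # running energy estimate for the closed state
        self.event_times = []         # timestamps (s) of detected blinks/yawns

    @staticmethod
    def filtered_energy(patch):
        """Total energy of a high-pass-filtered eye/mouth patch.
        Open features show more high-frequency content (eyelid/lip
        contours, iris, teeth), hence higher energy."""
        p = patch.astype(float)
        # 4-neighbour Laplacian: a crude spatial high-pass stand-in
        hp = (4.0 * p[1:-1, 1:-1]
              - p[:-2, 1:-1] - p[2:, 1:-1]
              - p[1:-1, :-2] - p[1:-1, 2:])
        return float(np.sum(hp ** 2))

    def classify(self, patch):
        """Return 'open' or 'closed' by comparing the current energy
        with the midpoint of the two adaptive levels."""
        if self.open_level is None or self.closed_level is None:
            return 'unknown'  # levels not calibrated yet
        threshold = 0.5 * (self.open_level + self.closed_level)
        return 'open' if self.filtered_energy(patch) > threshold else 'closed'

    def on_motion_event(self, t, energy_before, energy_after):
        """Called when the motion stage detects a blink/yawn.  The
        event provides one sample of each state (the feature was open
        before closing, or vice versa), so both levels are refreshed."""
        hi = max(energy_before, energy_after)
        lo = min(energy_before, energy_after)
        if self.open_level is None:
            self.open_level, self.closed_level = hi, lo
        else:
            a = self.adapt_rate
            self.open_level = (1 - a) * self.open_level + a * hi
            self.closed_level = (1 - a) * self.closed_level + a * lo
        self.event_times.append(t)

    def event_frequency(self, t_now, window=60.0):
        """Events per minute over a sliding window; a drop in blink
        frequency (or a rise in yawns) is the hypovigilance cue."""
        recent = [t for t in self.event_times if t_now - t <= window]
        return len(recent) * 60.0 / window


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mon = FeatureStateMonitor()
    open_patch = rng.integers(0, 255, (32, 32))   # textured patch: high energy
    closed_patch = np.full((32, 32), 128)         # flat patch: low energy
    # Calibrate the levels from one simulated blink at t = 0.0 s
    mon.on_motion_event(0.0,
                        mon.filtered_energy(open_patch),
                        mon.filtered_energy(closed_patch))
    print(mon.classify(open_patch))         # -> 'open'
    print(mon.classify(closed_patch))       # -> 'closed'
    print(mon.event_frequency(t_now=30.0))  # events/min over the last minute
```

In use, the motion stage (the retina/cortex pipeline of Benoit and Caplier) would call on_motion_event with the patch energies measured just before and after the detected blink or yawn, so both levels stay current for the person and illumination at hand, matching the per-event adaptation described in the abstract.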