Human-computer interaction based on eye movement tracking

An advanced approach to man-machine interaction is proposed in which computer vision techniques are used to interpret user actions. The key idea of the approach is the combined use of head motions for visual navigation and of eye pupil positions for context switching within the graphical human-computer interface. This allows a partial decoupling of the visual models used for tracking eye features, with beneficial effects on both computational speed and adaptation to user characteristics. The applications range from navigation and selection in virtual reality and multimedia systems to aids for the disabled and the monitoring of typical user actions in front of advanced terminals. The feasibility of the approach is tested and discussed in the case of a virtual reality application, the virtual museum.
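
To illustrate the decoupling described above, the following is a minimal sketch (not the authors' implementation) of a control loop in which a head-pose estimate drives viewpoint navigation while a coarse pupil position selects the active interface context. All class names, function names, thresholds, and conventions here are hypothetical placeholders for whatever trackers the underlying vision system provides.

```python
from dataclasses import dataclass

@dataclass
class HeadPose:
    pan: float   # horizontal head rotation, degrees (assumed convention)
    tilt: float  # vertical head rotation, degrees (assumed convention)

@dataclass
class PupilState:
    offset_x: float  # normalized pupil offset within the eye region, in [-1, 1]
    offset_y: float

def navigate(pose: HeadPose, gain: float = 10.0) -> tuple[float, float]:
    """Map head motion to a displacement of the on-screen viewpoint."""
    return gain * pose.pan, gain * pose.tilt

def select_context(pupil: PupilState, threshold: float = 0.5) -> str:
    """Map a coarse pupil position to one of a few interface contexts."""
    if pupil.offset_x > threshold:
        return "menu_right"
    if pupil.offset_x < -threshold:
        return "menu_left"
    return "scene"

def process_frame(pose: HeadPose, pupil: PupilState) -> None:
    # The two estimators are consumed independently, which is what allows the
    # visual models for head and pupil tracking to be (partially) decoupled.
    dx, dy = navigate(pose)
    context = select_context(pupil)
    print(f"move viewpoint by ({dx:.1f}, {dy:.1f}), active context: {context}")

if __name__ == "__main__":
    process_frame(HeadPose(pan=0.3, tilt=-0.1), PupilState(offset_x=0.7, offset_y=0.0))
```

Because navigation depends only on the head-pose estimate and context switching only on the pupil estimate, each tracker can be tuned or adapted to the user separately, which is the practical benefit the abstract attributes to the decoupling.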
