EXTENDING THE LIMITS FOR GAZE POINTING THROUGH THE USE OF SPEECH

Eye trackers have been used as pointing devices for a number of years. Due to the inherent limits on the accuracy of eye gaze, however, interaction has been restricted to targets at least one degree of visual angle in size. Consequently, targets in today’s gaze-based interfaces have sizes and layouts far removed from what users perceive as “natural settings”. To cope with these accuracy constraints, we developed a multimodal pointing technique combining eye gaze and speech input. We tested the technique in a user study on pointing at multiple targets. Results suggest a pointing accuracy of 93% for targets subtending 0.85 degrees with 0.3-degree gaps between them. User performance is thus shown to approach the limit of practical pointing. Effectively, developing a user interface that supports a hands-free style of interaction while retaining a design similar to today’s common interfaces appears to be a feasible task.
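One plausible way to combine the two modalities, sketched below purely for illustration (the abstract does not specify the fusion mechanism), is to use the spoken word to disambiguate among targets and the gaze point to resolve which matching target was intended. The `select_target` function, the target layout, and the one-degree tolerance radius are all assumptions, not the authors' method.

```python
import math

def select_target(gaze_xy, spoken_label, targets, radius_deg=1.0):
    """Hypothetical gaze + speech fusion: among targets whose label
    matches the recognized speech, pick the one closest to the gaze
    point, provided it lies within a tolerance radius (in degrees
    of visual angle, matching typical eye-tracker accuracy)."""
    candidates = [t for t in targets if t["label"] == spoken_label]
    best, best_d = None, float("inf")
    for t in candidates:
        d = math.dist(gaze_xy, t["xy"])  # distance in visual degrees
        if d < best_d:
            best, best_d = t, d
    return best if best_d <= radius_deg else None

# Example layout: two identically labeled targets; gaze disambiguates.
targets = [
    {"label": "ok", "xy": (0.0, 0.0)},
    {"label": "ok", "xy": (5.0, 5.0)},
    {"label": "cancel", "xy": (0.3, 0.1)},
]
```

The point of the sketch is that speech carries the target's identity while gaze carries its location, so neither channel alone needs sub-degree precision.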