Integrating discrete events and continuous head movements for video-based interaction techniques

Human head gestures can potentially trigger commands from a list of available options in graphical user interfaces or in virtual and smart environments. However, continuous tracking techniques are limited in their ability to generate the discrete events needed to execute a predefined set of commands. In this article, we discuss how a set of discrete events can be encoded by integrating continuous head movements with the crossing-based interaction paradigm. A set of commands can be encoded through specific sequences of crossing points as a head-mouse cursor, such as a scaled pointer, interacts with a graphical object. The goal of the present experiment was to test the perceptual-motor performance of novices in target acquisition tasks using a subset of round head gestures and symbolic icons designating eight types of directional head movements. We demonstrated that novices can execute round head gestures equally well in clockwise and counter-clockwise directions, making two crossings in about 2 s or three crossings in about 3 s. None of the participants reported neck strain or other problems after 360 trials performed during a 40-min test on each of 5 days.
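
As a concrete illustration of the encoding described above, the Python sketch below labels each boundary crossing of a rectangular target by the side on which it occurs and matches the accumulated sequence against a command table. It assumes a head tracker that reports cursor positions frame by frame; the Target and CrossingRecognizer names, the specific command table, and the ~3 s timeout are illustrative assumptions rather than the article's implementation.

```python
# A minimal sketch of crossing-based command encoding, assuming a head
# tracker that delivers cursor positions frame by frame. The edge labels,
# the command table, and the ~3 s timeout are illustrative assumptions;
# the article's actual gesture set and timings may differ.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Target:
    """Axis-aligned graphical object whose boundary the cursor crosses."""
    left: float
    top: float
    right: float
    bottom: float

    def contains(self, p):
        x, y = p
        return self.left <= x <= self.right and self.top <= y <= self.bottom

    def edge_crossed(self, p_prev, p_curr):
        """Label a boundary crossing by the side ('N','S','E','W') on which
        the outside endpoint lies; return None if no crossing occurred."""
        if self.contains(p_prev) == self.contains(p_curr):
            return None  # cursor stayed inside or outside: no crossing
        x, y = p_prev if not self.contains(p_prev) else p_curr
        if x < self.left:
            return 'W'
        if x > self.right:
            return 'E'
        return 'N' if y < self.top else 'S'  # screen y grows downward

# Hypothetical command table: ordered crossing sequences encode commands.
COMMANDS = {
    ('E', 'W'): 'next',          # two crossings, ~2 s
    ('W', 'E'): 'previous',
    ('N', 'E', 'S'): 'select',   # clockwise round gesture, ~3 s
    ('N', 'W', 'S'): 'cancel',   # counter-clockwise round gesture
}

@dataclass
class CrossingRecognizer:
    target: Target
    timeout: float = 3.0         # discard sequences slower than ~3 s
    sequence: list = field(default_factory=list)
    started: Optional[float] = None

    def update(self, p_prev, p_curr, t):
        """Feed one cursor step at time t; return a command name or None."""
        if self.started is not None and t - self.started > self.timeout:
            self.sequence, self.started = [], None  # too slow: start over
        edge = self.target.edge_crossed(p_prev, p_curr)
        if edge is None:
            return None
        if self.started is None:
            self.started = t
        self.sequence.append(edge)
        command = COMMANDS.get(tuple(self.sequence))
        if command is not None:
            self.sequence, self.started = [], None
        return command
```

Under these assumptions, a clockwise round gesture over a target would produce the crossing sequence ('N', 'E', 'S') within the timeout and fire the 'select' command, while an incomplete or overly slow gesture is silently discarded.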
