Gaze-touch: combining gaze with multi-touch for interaction on the same surface

Gaze has the potential to complement multi-touch for interaction on the same surface. We present gaze-touch, a technique that combines the two modalities based on the principle of 'gaze selects, touch manipulates'. Gaze is used to select a target and is coupled with multi-touch gestures that the user can perform anywhere on the surface. Gaze-touch enables users to manipulate any target from the same touch position, providing whole-surface reachability and rapid context switching. Conversely, gaze-touch enables manipulation of the same target from any touch position on the surface, for example to avoid occlusion. Gaze-touch is designed to complement direct-touch as the default interaction on multi-touch surfaces. We provide a design space analysis of the properties of gaze-touch versus direct-touch, and present four applications that explore how gaze-touch can be used alongside direct-touch. The applications demonstrate use cases for interchangeable, complementary and alternative use of the two modes of interaction, and introduce novel techniques arising from the combination of gaze-touch and conventional multi-touch.
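The core principle lends itself to a compact event-handling sketch. Below is a minimal, hypothetical Python illustration of 'gaze selects, touch manipulates', not the authors' implementation: the Target and GazeTouchController classes, the hit-test radius, and the handler names are all assumptions for illustration. On touch-down, the target under the current gaze point is selected; subsequent relative finger motion manipulates that target from wherever the hand rests, which is what enables whole-surface reachability and occlusion avoidance.

```python
from dataclasses import dataclass

# Hypothetical sketch of the 'gaze selects, touch manipulates' principle.
# All names and parameters here are illustrative assumptions.

@dataclass
class Target:
    name: str
    x: float
    y: float

    def contains(self, px: float, py: float, radius: float = 50.0) -> bool:
        # Coarse hit test around the target centre; gaze estimates are noisy,
        # so a generous radius stands in for real fixation filtering.
        return (self.x - px) ** 2 + (self.y - py) ** 2 <= radius ** 2

class GazeTouchController:
    """On touch-down, select the target under the current gaze point;
    while the touch moves, manipulate that target regardless of where
    the finger actually is on the surface (indirect manipulation)."""

    def __init__(self, targets):
        self.targets = targets
        self.selected = None
        self.last_touch = None

    def on_touch_down(self, touch_x, touch_y, gaze_x, gaze_y):
        # 'Gaze selects': the finger position is ignored for selection.
        self.selected = next(
            (t for t in self.targets if t.contains(gaze_x, gaze_y)), None)
        self.last_touch = (touch_x, touch_y)

    def on_touch_move(self, touch_x, touch_y):
        # 'Touch manipulates': relative finger motion drags the
        # gaze-selected target, so the hand can rest anywhere.
        if self.selected and self.last_touch:
            dx = touch_x - self.last_touch[0]
            dy = touch_y - self.last_touch[1]
            self.selected.x += dx
            self.selected.y += dy
            self.last_touch = (touch_x, touch_y)

    def on_touch_up(self):
        self.selected = None
        self.last_touch = None

# Usage: look at a distant target, touch down near the body, drag.
targets = [Target("photo", x=900.0, y=120.0)]
ctl = GazeTouchController(targets)
ctl.on_touch_down(touch_x=200, touch_y=600, gaze_x=905, gaze_y=118)
ctl.on_touch_move(touch_x=230, touch_y=590)   # moves 'photo' by (+30, -10)
ctl.on_touch_up()
print(targets[0])  # Target(name='photo', x=930.0, y=110.0)
```

A direct-touch handler would instead hit-test the touch point itself on touch-down; switching between the two modes then reduces to choosing which coordinate pair drives the selection step.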
