Eliciting Mid-Air Gestures for Wall-Display Interaction

Freehand mid-air gestures are a promising input method for interacting with wall displays. However, work on mid-air gestures for wall-display interaction has mainly explored what is technically possible, which might not result in gestures that users would prefer. This paper presents a guessability study in which 20 participants performed gestures for 25 actions on a three-meter-wide display. Based on the resulting 1124 gestures, we describe user-defined mid-air gestures for wall-display interaction and characterize the types of gestures users prefer in this context. The resulting gestures were largely influenced by surface interaction; they tended to be larger and more physically based than gestures elicited in previous studies using smaller displays.
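For readers unfamiliar with the guessability methodology, consensus in such studies is typically quantified with an agreement score: for each action (referent), the elicited proposals are grouped into identical gestures, the squared proportions of the groups are summed, and the result is averaged over all actions. The Python sketch below illustrates that computation; the action names, gesture labels, and the agreement_score function are hypothetical examples, not data or code from the study.

```python
from collections import Counter

def agreement_score(proposals_by_action):
    """Average agreement across actions, following the standard
    guessability-study formulation: for each action, sum the squared
    proportions of identical gesture proposals."""
    per_action = []
    for action, proposals in proposals_by_action.items():
        total = len(proposals)
        groups = Counter(proposals)  # identical proposals form one group
        per_action.append(sum((size / total) ** 2 for size in groups.values()))
    return sum(per_action) / len(per_action)

# Hypothetical elicitation data: gesture labels assigned by coders.
example = {
    "zoom in": ["spread hands", "spread hands", "push forward", "spread hands"],
    "dismiss": ["swipe away", "swipe away", "wave", "push away"],
}
print(round(agreement_score(example), 3))  # 0.5 for this toy data
```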
