Understanding the Design Space of Mouth Microgestures

As wearable devices move toward the face (e.g., smart earbuds and glasses), there is an increasing need to facilitate intuitive interactions with these devices. Current sensing techniques can already detect many mouth-based gestures; however, users' preferences for these gestures are not fully understood. In this paper, we investigate the design space and usability of mouth-based microgestures. We first conducted brainstorming sessions (N=16) and compiled an extensive set of 86 user-defined gestures. Then, with an online survey (N=50), we assessed the physical and mental demand of our gesture set and identified a subset of 14 gestures that can be performed easily and naturally. Finally, we conducted a remote Wizard-of-Oz usability study (N=11) mapping gestures to various daily smartphone operations in both sitting and walking contexts. From these studies, we develop a taxonomy for mouth gestures, finalize a practical gesture set for common applications, and provide design guidelines for future mouth-based gesture interactions.
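
The abstract describes narrowing 86 gestures down to the 14 least demanding ones using survey ratings, but does not spell out the aggregation step. Below is a minimal Python sketch of one plausible reading, assuming per-gesture physical and mental demand ratings on a 7-point scale; the gesture names, data, composite scoring, and cutoff k are illustrative assumptions, not the paper's actual procedure.

    from statistics import mean

    # Hypothetical survey data: ratings[gesture] is a list of
    # (physical_demand, mental_demand) pairs, one per respondent,
    # each on a 7-point scale (1 = very low, 7 = very high).
    ratings = {
        "tongue swipe left": [(2, 2), (3, 2), (2, 1)],
        "puff both cheeks":  [(2, 3), (2, 2), (3, 3)],
        "bite twice":        [(5, 4), (4, 5), (5, 5)],
    }

    def composite_demand(pairs):
        # Average physical and mental demand into one score per gesture.
        return mean((p + m) / 2 for p, m in pairs)

    # Rank gestures from least to most demanding and keep the easiest k
    # (the paper keeps 14 of 86; k = 2 of 3 here purely for illustration).
    k = 2
    ranked = sorted(ratings, key=lambda g: composite_demand(ratings[g]))
    easy_subset = ranked[:k]
    print(easy_subset)  # -> ['tongue swipe left', 'puff both cheeks']

The same shape would apply to workload-style ratings collected per demand dimension; only the composite scoring function would change.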
