ProbUI: Generalising Touch Target Representations to Enable Declarative Gesture Definition for Probabilistic GUIs

We present ProbUI, a mobile touch GUI framework that merges the ease of use of declarative gesture definition with the benefits of probabilistic reasoning. It helps developers handle uncertain input and implement feedback and GUI adaptations. ProbUI replaces today's static target models (bounding boxes) with probabilistic gestures ("bounding behaviours"). It is the first touch GUI framework to unite concepts from three areas of related work: 1) Developers declaratively define touch behaviours for GUI targets. As a key insight, the declarations imply simple probabilistic models (HMMs with 2D Gaussian emissions). 2) ProbUI derives these models automatically to evaluate users' touch sequences. 3) It then infers the intended behaviour and target. Developers bind callbacks to gesture progress, completion, and other conditions. We show ProbUI's value by implementing existing and novel widgets, and report developer feedback from a survey and a lab study.
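
The core idea of the abstract, that a short declarative behaviour description implies a left-to-right HMM with 2D Gaussian emissions which can then be scored against an observed touch sequence, can be illustrated with a minimal sketch. The zone tokens, the `declare` and `score` helpers, and all parameter choices below are hypothetical illustrations under assumed semantics, not the actual ProbUI API or syntax:

```python
# Minimal sketch: a declarative behaviour string implies a left-to-right HMM
# with 2D Gaussian emissions, scored against an observed touch sequence.
# Zone names, 'declare'/'score', and all parameters are hypothetical
# illustrations, not the actual ProbUI API.
import numpy as np

def gaussian_logpdf(x, mean, cov):
    """Log density of a 2D Gaussian at point x."""
    d = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.inv(cov) @ d + logdet + 2 * np.log(2 * np.pi))

def declare(behaviour, box):
    """Map zone tokens (relative to a target's box) to Gaussian emissions.

    'C' = centre, 'N'/'S'/'E'/'W' = just beyond the respective edge.
    Returns one (mean, cov) pair per HMM state, in left-to-right order.
    """
    x, y, w, h = box
    cx, cy = x + w / 2, y + h / 2
    zones = {'C': (cx, cy), 'N': (cx, y - h / 2), 'S': (cx, y + 1.5 * h),
             'W': (x - w / 2, cy), 'E': (x + 1.5 * w, cy)}
    cov = np.diag([(w / 2) ** 2, (h / 2) ** 2])  # spread scales with the box
    return [(np.array(zones[t], dtype=float), cov) for t in behaviour.split()]

def score(states, touches, stay=0.5):
    """Forward algorithm for a left-to-right HMM; returns log P(touches | model)."""
    n = len(states)
    log_stay, log_move = np.log(stay), np.log(1 - stay)
    alpha = np.full(n, -np.inf)
    alpha[0] = gaussian_logpdf(touches[0], *states[0])  # must start in state 0
    for x in touches[1:]:
        new = np.full(n, -np.inf)
        for j in range(n):
            stay_p = alpha[j] + log_stay
            move_p = alpha[j - 1] + log_move if j > 0 else -np.inf
            new[j] = np.logaddexp(stay_p, move_p) + gaussian_logpdf(x, *states[j])
        alpha = new
    return alpha[-1]  # a completed behaviour ends in the last state

# A 'slide right across the button' behaviour: west edge -> centre -> east edge.
button = (100, 100, 80, 40)                # x, y, width, height
slide_right = declare('W C E', button)
tap = declare('C C', button)

touches = [np.array(p, dtype=float) for p in [(70, 120), (140, 118), (205, 122)]]
print('slide right:', score(slide_right, touches))  # should score higher
print('tap:        ', score(tap, touches))
```

In this sketch, inferring the intended behaviour and target amounts to comparing such scores across all declared models; the framework described in the abstract additionally tracks a gesture's progress through the states so that developers can bind callbacks to progress, completion, and other conditions.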
