zSense: Enabling Shallow Depth Gesture Recognition for Greater Input Expressivity on Smart Wearables

In this paper, we present zSense, a shallow-depth gesture recognition system based on non-focused infrared sensors that provides greater input expressivity for spatially limited devices such as smart wearables. To achieve this, we introduce a novel Non-linear Spatial Sampling (NSS) technique that significantly reduces the number of infrared sensors and emitters required. These can be arranged in many different configurations; for example, a configuration can use as few as one sensor and two emitters. We implemented different configurations of zSense on smart wearables such as smartwatches, smartglasses, and smart rings. These configurations fit naturally onto the flat or curved surfaces of such devices, opening up a wide range of zSense-enabled application scenarios. Our evaluations showed gesture recognition accuracy above 94.8% across all configurations.
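To make the sampling idea concrete, the sketch below illustrates one way a minimal one-sensor, two-emitter configuration could be read and classified: the emitters are time-multiplexed, the single sensor's intensity trace is de-multiplexed into one trace per emitter, the traces over a gesture window are flattened into a feature vector, and a standard SVM performs the recognition. This is a minimal sketch under assumed names, window sizes, and synthetic data; it is not the paper's actual NSS pipeline or hardware interface.

    # Minimal sketch of a zSense-style sensing pipeline (illustrative assumptions
    # throughout): one non-focused IR sensor read while two emitters are
    # time-multiplexed; per-emitter traces over a window feed a classifier.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    NUM_EMITTERS = 2   # the minimal one-sensor / two-emitter configuration
    WINDOW = 32        # samples per emitter per gesture window (assumed)

    def read_window(raw_stream):
        """De-multiplex an interleaved sensor stream into one trace per
        emitter and flatten the traces into a single feature vector."""
        frame = raw_stream.reshape(WINDOW, NUM_EMITTERS)  # rows: time, cols: emitter
        return frame.T.ravel()                            # per-emitter traces, concatenated

    # Synthetic stand-in for recorded gesture windows: two fake gesture
    # classes whose reflected-intensity profiles differ over time.
    rng = np.random.default_rng(0)

    def fake_gesture(label, n):
        base = np.linspace(0, 1, WINDOW) if label else np.linspace(1, 0, WINDOW)
        windows = []
        for _ in range(n):
            trace = np.stack([base + 0.1 * rng.standard_normal(WINDOW)
                              for _ in range(NUM_EMITTERS)], axis=1)
            windows.append(read_window(trace.ravel()))
        return windows

    X = np.array(fake_gesture(0, 100) + fake_gesture(1, 100))
    y = np.array([0] * 100 + [1] * 100)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = SVC(kernel="rbf").fit(X_train, y_train)  # SVM as one plausible classifier choice
    print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")

The point of the sketch is the data flow: because the emitters are pulsed at different times, a single sensor yields multiple spatially distinct measurements per time step, which is what lets the sensor-emitter count stay so small.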
