SoundCraft: Enabling Spatial Interactions on Smartwatches using Hand-Generated Acoustics

We present SoundCraft, a smartwatch prototype embedded with a microphone array that angularly localizes, in azimuth and elevation, acoustic signatures: non-vocal acoustics produced with our hands. Acoustic signatures are common in our daily lives, such as when we snap or rub our fingers, tap on objects, or even use an auxiliary object to generate the sound. We demonstrate that our prototype can capture and leverage the spatial location of such naturally occurring acoustics. We describe our algorithm, adapted from the MUltiple SIgnal Classification (MUSIC) technique [37], which enables robust localization and classification of the acoustics even when the microphones must be placed in close proximity. SoundCraft enables a rich set of spatial interaction techniques, including quick access to smartwatch content, rapid command invocation, in-situ sketching, and multi-user around-device interaction. Through a series of user studies, we validate SoundCraft's localization and classification capabilities in both quiet and noisy environments.
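
The paper's own pipeline adapts MUSIC for closely spaced microphones and additionally classifies the signatures; none of that is reproduced here. As a rough illustration of the underlying subspace idea only, the sketch below runs textbook narrowband MUSIC on a hypothetical 4-microphone uniform linear array at a single frequency bin, which recovers azimuth alone. All parameters (array size, spacing, frequency) are illustrative assumptions, not SoundCraft's configuration.

```python
# Hypothetical narrowband MUSIC sketch (NumPy only). The 4-mic uniform
# linear array, 2 cm spacing, and single frequency bin are illustrative
# assumptions, not SoundCraft's actual setup.
import numpy as np

def music_spectrum(X, n_sources, spacing, freq, c=343.0,
                   angles_deg=np.linspace(-90.0, 90.0, 181)):
    """X: (n_mics, n_snapshots) complex snapshots of one STFT bin."""
    n_mics = X.shape[0]
    R = X @ X.conj().T / X.shape[1]        # spatial covariance estimate
    _, eigvecs = np.linalg.eigh(R)         # eigenvalues ascending
    En = eigvecs[:, :n_mics - n_sources]   # noise subspace
    k = 2.0 * np.pi * freq / c             # wavenumber at this bin
    P = np.empty(angles_deg.shape)
    for i, theta in enumerate(np.deg2rad(angles_deg)):
        # Steering vector for a plane wave arriving from angle theta.
        a = np.exp(-1j * k * spacing * np.arange(n_mics) * np.sin(theta))
        # MUSIC pseudo-spectrum: peaks where a(theta) is orthogonal to En.
        P[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return angles_deg, P

# Toy usage: one 2 kHz source at +30 degrees, 2 cm mic spacing, light noise.
rng = np.random.default_rng(0)
mics, d, f, theta = 4, 0.02, 2000.0, np.deg2rad(30.0)
a = np.exp(-1j * 2.0 * np.pi * f / 343.0 * d * np.arange(mics) * np.sin(theta))
s = rng.standard_normal(256) + 1j * rng.standard_normal(256)
X = np.outer(a, s) + 0.1 * (rng.standard_normal((mics, 256))
                            + 1j * rng.standard_normal((mics, 256)))
angles, P = music_spectrum(X, n_sources=1, spacing=d, freq=f)
print(angles[int(np.argmax(P))])  # prints an angle near 30
```

The 2 cm spacing in the toy example mirrors the tight-baseline constraint the abstract mentions: with so small an aperture, inter-microphone phase differences are subtle, which is why a subspace method such as MUSIC is a better fit than simple time-difference-of-arrival triangulation.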

[1] Robert Xiao, et al. Toffee: enabling ad hoc, around-device interaction with acoustic time-of-arrival correlation, 2014, MobileHCI '14.

[2] Mathieu Le Goc, et al. A low-cost transparent electric field sensor for 3D interaction on mobile devices, 2014, CHI '14.

[3] Gierad Laput, et al. SkinTrack: Using the Body as an Electrical Waveguide for Continuous Finger Tracking on the Skin, 2016, CHI '16.

[4] Desney S. Tan, et al. Enabling always-available input with muscle-computer interfaces, 2009, UIST '09.

[5] Kent Lyons, et al. The Gesture Watch: A Wireless Contact-free Gesture-based Wrist Interface, 2007, IEEE International Symposium on Wearable Computers (ISWC '07).

[6] Keisuke Nakamura, et al. Intelligent sound source localization for dynamic environments, 2009, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '09).

[7] Hamed Ketabdar, et al. MagiWrite: towards touchless digit entry using 3D space around mobile devices, 2010, MobileHCI '10.

[8] Chris Harrison, et al. Abracadabra: wireless, high-precision, and unpowered finger input for very small mobile devices, 2009, UIST '09.

[9] Pourang Irani, et al. SAMMI: A Spatially-Aware Multi-Mobile Interface for Analytic Map Navigation Tasks, 2015, MobileHCI '15.

[10] Wei-Hung Chen, et al. Blowatch: Blowable and Hands-free Interaction for Smartwatches, 2015, CHI EA '15.

[11] Chris Harrison, et al. Scratch input: creating large, inexpensive, unpowered and mobile finger input surfaces, 2008, UIST '08.

[12] Pavel Slavík, et al. Non-speech input and speech recognition for real-time control of computer games, 2006, Assets '06.

[13] Jackson Feijó Filho, et al. Advances on Breathing Based Text Input for Mobile Devices, 2015, HCI.

[14] Michael Rohs, et al. Hoverflow: exploring around-device interaction with IR distance sensors, 2009, MobileHCI '09.

[15] Joseph A. Paradiso, et al. PingPongPlus: design of an athletic-tangible interface for computer-supported cooperative play, 1999, CHI '99.

[16] Desney S. Tan, et al. Skinput: appropriating the body as an input surface, 2010, CHI '10.

[17] Li-Wei Chan, et al. PalmGesture: Using Palms as Gesture Interfaces for Eyes-free Input, 2015, MobileHCI '15.

[18] Otmar Hilliges, et al. In-air gestures around unmodified mobile devices, 2014, UIST '14.

[19] Gregory D. Abowd, et al. SoundTrak: Continuous 3D Tracking of a Finger Using Active Acoustics, 2017, Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.

[20] Robert Xiao, et al. Acoustic barcodes: passive, durable and inexpensive notched identification tags, 2012, UIST '12.

[21] Tatsuya Kawahara, et al. Optimized wavelet-domain filtering under noisy and reverberant conditions, 2015.

[22] Joseph A. Paradiso, et al. Tracking and characterizing knocks atop large interactive displays, 2005.

[23] Johannes Schöning, et al. WatchMe: A Novel Input Method Combining a Smartwatch and Bimanual Interaction, 2015, CHI EA '15.

[24] Gregory D. Abowd, et al. Whoosh: non-voice acoustics for low-cost, hands-free, and rapid input on smartwatches, 2016, ISWC '16.

[25] Takeo Igarashi, et al. Voice augmented manipulation: using paralinguistic information to manipulate mobile devices, 2013, MobileHCI '13.

[26] Daniel Ashbrook. Enabling mobile microinteractions, 2010.

[27] Gregory D. Abowd, et al. BLUI: low-cost localized blowable user interfaces, 2007, UIST '07.

[28] Xiao Li, et al. The Vocal Joystick: evaluation of voice-based cursor control techniques, 2006, Assets '06.

[29] Anind K. Dey, et al. Serendipity: Finger Gesture Recognition using an Off-the-Shelf Smartwatch, 2016, CHI '16.

[30] B. D. Van Veen, et al. Beamforming: a versatile approach to spatial filtering, 1988, IEEE ASSP Magazine.

[31] Nissanka B. Priyantha, et al. The Cricket Location-Support System, 2000, MobiCom '00.

[32] Jun Rekimoto. GestureWrist and GesturePad: unobtrusive wearable interaction devices, 2001, Fifth International Symposium on Wearable Computers (ISWC '01).

[33] Patrick Olivier, et al. Digits: freehand 3D interactions anywhere using a wrist-worn gloveless sensor, 2012, UIST '12.

[34] Yang Zhang, et al. Tomo: Wearable, Low-Cost Electrical Impedance Tomography for Hand Gesture Recognition, 2015, UIST '15.

[35] Pedro Lopes, et al. Augmenting touch interaction through acoustic sensing, 2011, ITS '11.

[36] Ming Yang, et al. A Novel Human-Computer Interface Based on Passive Acoustic Localisation, 2007, HCI.

[37] R. O. Schmidt. Multiple emitter location and signal parameter estimation, 1986, IEEE Transactions on Antennas and Propagation.

[38] Gierad Laput, et al. ViBand: High-Fidelity Bio-Acoustic Sensing Using Commodity Smartwatch Accelerometers, 2016, UIST '16.

[39] Takeo Igarashi, et al. Voice as sound: using non-verbal voice input for interactive control, 2001, UIST '01.

[40] Loren G. Terveen, et al. The sound of one hand: a wrist-mounted bio-acoustic fingertip gesture interface, 2002, CHI EA '02.

[41] Suranga Nanayakkara, et al. zSense: Enabling Shallow Depth Gesture Recognition for Greater Input Expressivity on Smart Wearables, 2015, CHI '15.