Placing and Recalling Virtual Items on the Skin

The human skin provides an ample, always-on surface for input to smartwatches, mobile phones, and remote displays. Using touch on bare skin to issue commands, however, requires users to recall the location of items without direct visual feedback. We present an in-depth study in which participants placed 30 items on the hand and forearm and attempted to recall their locations. We found that participants used a variety of landmarks, personal associations, and semantic groupings in placing the items on the skin. Although participants most frequently used anatomical landmarks (e.g., fingers, joints, and nails), recall rates were higher for items placed on personal landmarks, including scars and tattoos. We further found that personal associations between items improved recall, and that participants often grouped important items in similar areas, such as family members on the nails. We conclude by discussing the implications of our findings for the design of skin-based interfaces.
