Drag and drop the apple: the semantic weight of words and images in touch-based interaction

In this paper we report a user study investigating the effect of semantic weight in a touch-based drag-and-drop task. The study was motivated by our interest in exploring factors that influence touch behavior and is supported by findings from related neuroscience research. The question we set out to answer is: "Do people drag the representation of a small, light real-world object (e.g., an apple) differently from the representation of a large, heavy real-world object (e.g., a car)?" Participants repeatedly performed a drag-and-drop task on a tablet device. The dragged objects had the same physical size on screen but represented real-world objects that were either heavy and large or light and small. We studied two representation modalities (image and text). In both modalities, semantically heavier objects were dragged significantly faster than semantically lighter objects.
