Physical versus virtual pointing

Burnaby, BC, Canada V5A 1S6
(604) 291-3004
christine_mackenzie@sfu.ca

ABSTRACT

An experiment was conducted to investigate differences in performance between virtual pointing, where a 2-D computer image representing the hand and targets was superimposed on the workspace, and physical pointing, with vision of the hand and targets painted on the work surface. A detailed examination of movement kinematics revealed no differences in the initial phase of the movement, but showed that the final phase of homing in on smaller targets was more difficult in the virtual condition. These differences are summarised by a two-part model of movement time which also captures the effects of scaling distances to, and sizes of, targets. The implications of this model for the design, analysis, and classification of pointing devices and positioning tasks are discussed.

INTRODUCTION

Pointing to a location on a graphics display is an elemental gesture in many forms of human-computer interaction (HCI). Pointing movements have been studied in an attempt to understand perceptual-motor processes when we interact with real objects in the physical world. Our interest is in relating these theories and models from motor control to human performance in more abstract environments, where objects and actions represented on a graphics display are mediated by pointing devices.
In particular, we wonder how the limitations of current 2-D and 3-D virtual environments affect the planning and control of natural movements like aiming, pointing, reaching, grasping, and manipulating objects; and how detailed analyses of movement kinematics can be used to reveal systematic effects of these constraints on human performance in the HCI context.

Woodworth [12] first proposed that human pointing movements can be understood in terms of two movement phases: an initial planned impulse, which covers most of the distance, followed by a second phase of deceleration to the target under current control. According to Fitts [3], total movement time involves a tradeoff …
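The Fitts tradeoff referred to above is commonly written as MT = a + b log2(2A/W), where A is the movement amplitude (distance to the target) and W is the target width. A minimal sketch of this relationship follows; the coefficients a and b here are purely illustrative assumptions, not values fitted to this experiment's data:

```python
import math

def fitts_mt(amplitude, width, a=0.1, b=0.15):
    """Predicted movement time (seconds) under Fitts' law.

    a, b are illustrative intercept/slope coefficients (assumed
    for the example, not taken from this experiment).
    """
    index_of_difficulty = math.log2(2 * amplitude / width)  # in bits
    return a + b * index_of_difficulty

# Halving target width at a fixed amplitude adds one bit of
# difficulty, so predicted MT grows by exactly b seconds.
mt_wide = fitts_mt(amplitude=160, width=40)    # ID = log2(8) = 3 bits
mt_narrow = fitts_mt(amplitude=160, width=20)  # ID = log2(16) = 4 bits
```

Note that the logarithmic form means scaling A and W together (keeping their ratio constant) leaves the index of difficulty, and hence the predicted movement time, unchanged; the two-part model summarised in the abstract refines this by treating the initial impulse and the final homing phase separately.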