This paper presents a computational theory for generating the complicated arm movements needed for tasks such as reaching while avoiding obstacles, or scratching an itch on one arm with the other hand. The required movements are computed using many control units with virtual locations over the entire surface of the arm and hand. These units, called brytes, are like little brains, each with its own input and output and its own idea about how its virtual location should move. The paper explains how a previously developed gradient method for dealing with ill-posed multi-joint movements [1] can be applied to large numbers of spatially distributed controllers. Simulations illustrate when the arm movements are successful and when and why they fail. Many of these failures can be avoided by a simple method that adds intermediate reaching goals. The theory is consistent with a number of existing experimental observations.
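The gradient method the abstract refers to can be illustrated with a minimal sketch: a redundant (ill-posed) planar arm whose joint angles descend the gradient of the squared endpoint-to-goal distance via the Jacobian transpose. The link lengths, learning rate, and three-joint geometry below are illustrative assumptions, not values from the paper, and the sketch omits the distributed bryte controllers entirely.

```python
import numpy as np

# Illustrative link lengths for a planar 3-joint arm (assumed values).
LINK_LENGTHS = np.array([0.3, 0.25, 0.2])

def endpoint(angles):
    """Forward kinematics: 2-D position of the fingertip."""
    cum = np.cumsum(angles)  # absolute orientation of each link
    return np.array([np.sum(LINK_LENGTHS * np.cos(cum)),
                     np.sum(LINK_LENGTHS * np.sin(cum))])

def jacobian(angles):
    """2x3 Jacobian of the endpoint with respect to the joint angles."""
    cum = np.cumsum(angles)
    J = np.zeros((2, len(angles)))
    for j in range(len(angles)):
        # Rotating joint j moves every link from j onward.
        J[0, j] = -np.sum(LINK_LENGTHS[j:] * np.sin(cum[j:]))
        J[1, j] = np.sum(LINK_LENGTHS[j:] * np.cos(cum[j:]))
    return J

def reach(angles, goal, rate=0.2, steps=5000):
    """Gradient descent on 0.5*||endpoint - goal||^2 in joint space."""
    angles = angles.copy()
    for _ in range(steps):
        err = endpoint(angles) - goal
        if np.linalg.norm(err) < 1e-5:
            break
        # Gradient of the cost w.r.t. the angles is J^T err.
        angles -= rate * jacobian(angles).T @ err
    return angles

goal = np.array([0.4, 0.3])
final = reach(np.array([0.1, 0.1, 0.1]), goal)
```

Because the arm has three joints and the goal constrains only two coordinates, the problem is ill-posed: many joint configurations reach the same point, and the gradient simply selects a nearby one. A configuration-space obstacle term or intermediate waypoints, as the abstract describes, would be added as extra cost terms in the same descent.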
[1] E. Todorov. Optimality principles in sensorimotor control. Nature Neuroscience, 2004.
[2] D. Zipser et al. Reaching to grasp with a multi-jointed arm. I. Computational model. Journal of Neurophysiology, 2002.
[3] E. Brunamonti et al. Reaching in depth: hand position dominates over binocular eye position in the rostral superior parietal lobule. The Journal of Neuroscience, 2009.
[4] C. Gross et al. Coding of visual space by premotor neurons. Science, 1994.
[5] E. Marder. Foundations for the future. Journal of Neurophysiology, 2002.