A head-centered representation of 3D target location derived from opponent eye position commands

A neural network model is introduced of how the brain forms the spatial representations used to control sensory-guided and memory-guided eye and limb movements. These representations are expressed in both head-centered and body-centered coordinates, because the eyes move within the head, whereas the head, arms, and legs move with respect to the body. The model analyzes a key process in the formation of spatial representations: how humans and other mammals can skillfully act upon objects in 3D space despite the variable relative locations of their sensing and acting segments. The resulting spatial representations are built up from the same types of computations that are used to control motor commands. The natural form of neural computation appropriate for representing and controlling a bilaterally symmetric body is studied. Bilateral symmetry leads to competitive and cooperative interactions among bilaterally symmetric body segments, including opponent interactions between pairs of antagonistic neurons that measure spatial or motor offset with respect to an axis of symmetry.
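The opponent coding scheme described above can be illustrated with a minimal sketch. The specific normalization, gains, and the simple additive combination of retinal and eye-position signals below are illustrative assumptions for exposition, not the model's actual equations: a horizontal eye position command is encoded as an antagonistic (agonist, antagonist) pair whose difference measures offset from the axis of symmetry, and a head-centered target azimuth is recovered by combining that decoded eye position with the target's retinal offset.

```python
# Sketch of opponent (push-pull) coding of horizontal eye position.
# Assumptions (not the paper's equations): agonist + antagonist activity
# is conserved, and head-centered azimuth is the sum of the decoded eye
# position and the target's retinal offset.

TOTAL = 1.0  # conserved total activity of the opponent pair

def opponent_code(eye_azimuth_deg, max_deg=50.0):
    """Encode an eye azimuth (-max_deg..+max_deg) as an (agonist, antagonist) pair."""
    x = max(-1.0, min(1.0, eye_azimuth_deg / max_deg))  # normalize to [-1, 1]
    agonist = TOTAL * (1.0 + x) / 2.0   # grows with rightward rotation
    antagonist = TOTAL - agonist        # shrinks by the same amount (push-pull)
    return agonist, antagonist

def decode(agonist, antagonist, max_deg=50.0):
    """Recover the eye azimuth: the offset of the pair from the axis of symmetry."""
    return (agonist - antagonist) / TOTAL * max_deg

def head_centered_azimuth(retinal_deg, agonist, antagonist):
    """Head-centered target direction = retinal offset + current eye position."""
    return retinal_deg + decode(agonist, antagonist)

a, b = opponent_code(20.0)               # eyes rotated 20 deg to the right
print(head_centered_azimuth(5.0, a, b))  # target 5 deg right on the retina -> 25.0
```

The point of the sketch is the invariance: as the eyes move, the retinal offset and the opponent eye-position signal change in compensating ways, so the head-centered sum stays anchored to the target rather than to the line of sight.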