Learning visual coordinate transformations with competition

As the angle of gaze changes, so does the retinal location of the visual image of a stationary object. Since the object is nevertheless perceived as stationary, the retinotopic coordinates of the object must somehow be transformed into craniotopic (head-centered) coordinates using eye position information. Neurons in area 7a of posterior parietal cortex in macaque monkeys are thought to contribute to this transformation. The author describes a model of area 7a that incorporates a topographic map. Networks were trained with a competitive backpropagation learning rule to perform the transformation task and to develop this topographic map. The trained networks generalized well to previously unseen patterns. The study shows that a competitive backpropagation learning rule can train networks employing competitive activation mechanisms to learn continuous-valued functions, and that it is computationally feasible to construct a topographic map in area 7a that might be used for eye-position-independent spatial coding.
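For concreteness, the transformation the abstract refers to can be sketched in a toy form: to a first approximation, a craniotopic (head-centered) location is the retinotopic location plus the eye position. The sketch below trains a small feed-forward network on a 1-D version of this mapping using ordinary backpropagation; it does not implement the paper's competitive backpropagation rule or competitive activation mechanism, and all network sizes and parameters are illustrative assumptions.

# Toy sketch of the coordinate transformation task (not the author's model):
# learn craniotopic location = retinotopic location + eye position
# with a plain two-layer network and standard backpropagation.
import numpy as np

rng = np.random.default_rng(0)

# Training data: 1-D version of the task (values are arbitrary units).
n = 2000
retina = rng.uniform(-1.0, 1.0, size=(n, 1))   # retinal location of stimulus
eye = rng.uniform(-1.0, 1.0, size=(n, 1))      # horizontal eye position
x = np.hstack([retina, eye])                   # network input
y = retina + eye                               # head-centered target

# Small network with one tanh hidden layer, trained by gradient descent.
hidden = 16
W1 = rng.normal(0, 0.5, size=(2, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.5, size=(hidden, 1))
b2 = np.zeros(1)
lr = 0.2

for epoch in range(5000):
    h = np.tanh(x @ W1 + b1)                   # hidden layer activity
    pred = h @ W2 + b2                         # estimated craniotopic location
    err = pred - y

    # Backpropagate the mean squared-error gradient.
    d_pred = 2 * err / n
    dW2 = h.T @ d_pred
    db2 = d_pred.sum(axis=0)
    d_h = (d_pred @ W2.T) * (1 - h ** 2)
    dW1 = x.T @ d_h
    db1 = d_h.sum(axis=0)

    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

    if epoch % 1000 == 0:
        print(f"epoch {epoch}: mse = {np.mean(err ** 2):.4f}")

# Generalization check on unseen (retina, eye) combinations.
test = np.array([[0.3, -0.5], [-0.8, 0.2]])
print((np.tanh(test @ W1 + b1) @ W2 + b2).ravel())  # approximately [-0.2, -0.6]

The point of the sketch is only that the mapping is a simple continuous-valued function of retinal location and eye position, which is the kind of function the abstract says the competitively trained networks learned and generalized.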