An adaptive neural model for mapping invariant target position.
We perceive a stationary target in space as constant even though the registration of that target on our senses is continuously shifting. This article derives and simulates a neural network model that represents visual spot targets invariantly with respect to any combination of egocentric target measures. The model represents space in terms of the signals used to move in that space. It learns and maintains precise sensory-motor calibrations starting from only loosely defined relations, and it adapts to physical changes of the eye and muscles as well as to changes in internal system parameters. Its performance is noise- and fault-tolerant. Computer simulations show that the average error in target orientation after learning is about 1% of the total visual field extent, and the model maintains good accuracy over many different parameter choices. Its function is most closely related to that of the posterior parietal cortex, and testable predictions are made for the columnar topography and learning in that brain structure.
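To make the idea of an adaptively calibrated invariant target representation concrete, the sketch below is a minimal toy illustration, not the model described in the article: it learns to combine two shifting egocentric signals (retinal target position and eye position) into a position estimate that stays constant across gaze shifts. The linear readout, the delta-rule update, the noise level, and all parameter values are assumptions introduced here for illustration only.

```python
# Illustrative sketch only: learning an invariant (head-centered) target
# position from shifting egocentric signals, starting from loosely defined
# initial weights. This is NOT the article's network; it is a hypothetical
# error-driven calibration example.
import numpy as np

rng = np.random.default_rng(0)
VISUAL_FIELD = 100.0            # arbitrary units spanning the visual field

# Loosely defined initial calibration (true relation: target = retinal + eye).
w = rng.uniform(0.2, 1.8, size=2)
lr = 0.02

def estimate(retinal, eye):
    """Adaptive readout of invariant target position from egocentric signals."""
    return w[0] * retinal + w[1] * eye

errors = []
for step in range(5000):
    target = rng.uniform(-VISUAL_FIELD / 2, VISUAL_FIELD / 2)   # fixed point in space
    eye = rng.uniform(-VISUAL_FIELD / 2, VISUAL_FIELD / 2)      # current gaze direction
    retinal = target - eye                                      # shifting sensory registration
    retinal += rng.normal(0.0, 0.5)                             # sensory noise

    est = estimate(retinal, eye)
    err = target - est
    # Error-driven (delta-rule) update keeps the calibration precise even if
    # the sensory or motor mapping drifts over time.
    w += lr * err * np.array([retinal, eye]) / VISUAL_FIELD
    errors.append(abs(err))

print("mean |error| over last 500 trials: "
      f"{np.mean(errors[-500:]) / VISUAL_FIELD:.2%} of the visual field")
```

Running this toy script, the residual error settles to a small fraction of the visual field; the specific ~1% figure reported in the abstract refers to the article's full model and simulations, not to this sketch.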