Computation on the Transient
Line attractor networks have become standard workhorses of computational accounts of neural population processing for optimal perceptual inference, working memory, decision making and more. Such networks are defined by possessing a one-dimensional (line) or multi-dimensional (surface) manifold in the high-dimensional space of the activities of all the neurons in the network, onto a point of which the network's dynamics non-linearly project its state. The standard view, that the network represents information by the location of the point on this manifold at which it sits [1], is only appropriate if the computation to be performed by the network is aligned with the underlying symmetry implied by the manifold. In interesting cases, the computation that must be performed is orthogonal to this symmetry structure, and so an alternative computational view is required. Here, we illustrate the problem using a well-studied visual hyperacuity task, and suggest solutions involving different classes of computation during the network's transient evolution.
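The projection onto an attractor manifold can be sketched with a toy linear network; this is an illustrative assumption for exposition, not the model studied in the abstract. The recurrent weight matrix W below has eigenvalue 1 along the direction (1, 1), so under the dynamics dx/dt = -x + W x that direction is marginally stable (the attractor line) while the orthogonal direction decays, projecting any initial state onto the line x1 = x2.

```python
import numpy as np

# Toy two-neuron line attractor (illustrative sketch, not the network
# from the abstract). W - I has eigenvalues 0 (along the line x1 = x2,
# the attractor manifold) and -1 (orthogonal, decaying) under
# dx/dt = -x + W x.
W = np.array([[0.5, 0.5],
              [0.5, 0.5]])

def simulate(x0, dt=0.01, steps=2000):
    """Euler-integrate dx/dt = -x + W x from initial state x0."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x += dt * (-x + W @ x)
    return x

# Starting off the manifold, the state relaxes onto the line x1 = x2;
# the component along the line (the mean of the initial state) persists.
x_final = simulate([1.0, 0.0])
print(x_final)  # both entries ≈ 0.5
```

The location along the line encodes the retained quantity (here the mean input), which is the "standard view" of representation that the abstract argues against when the required computation is orthogonal to the manifold's symmetry.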
[1] T. U. Otto et al., "Perceptual learning with spatial uncertainties," Vision Research, 2006.
[2] P. Dayan et al., "Nonlinear ideal observation and recurrent preprocessing in perceptual learning," Network, 2003.
[3] P. Dayan et al., "Position Variance, Recurrence and Perceptual Learning," NIPS, 2000.
[4] K. Zhang, "Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: a theory," The Journal of Neuroscience, 1996.