Computation on the Transient

Line attractor networks have become standard workhorses of computational accounts of neural population processing for optimal perceptual inference, working memory, decision making, and more. Such networks are defined by possessing a one-dimensional (line) or multi-dimensional (surface) manifold in the high-dimensional space of the activities of all the neurons in the network, onto a point of which the network's dynamics non-linearly project its state. The standard view, that the network represents information by the location of the point on this manifold at which it sits [1], is appropriate only if the computation the network must perform is aligned with the underlying symmetry implied by the manifold. In interesting cases, however, the required computation is orthogonal to this symmetry structure, and so an alternative computational view is required. Here, we illustrate the problem using a well-studied visual hyperacuity task, and suggest solutions involving different classes of computation during the network's transient evolution.
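The defining property of such a network — a manifold of fixed points onto which the dynamics collapse the state — can be illustrated with a minimal sketch. The toy below (our own construction, not a model from this work) uses the simplest, linear case: a discrete-time network whose weight matrix has eigenvalue 1 along one direction u (the attractor line) and eigenvalue 0.5 on the orthogonal complement, so that any initial activity pattern relaxes onto the line while its projection along u is preserved.

```python
import numpy as np

def line_attractor_weights(n, u):
    """Weight matrix with eigenvalue 1 along u (the attractor line)
    and eigenvalue 0.5 on the orthogonal complement, so components
    off the line decay while the position on the line persists."""
    u = u / np.linalg.norm(u)
    P = np.outer(u, u)                  # projector onto the line
    return P + 0.5 * (np.eye(n) - P)

n = 8
rng = np.random.default_rng(1)
u = rng.standard_normal(n)              # direction of the attractor line
u_hat = u / np.linalg.norm(u)
W = line_attractor_weights(n, u)

x0 = rng.standard_normal(n)             # arbitrary initial activity
x = x0.copy()
for _ in range(60):                     # run the discrete-time dynamics
    x = W @ x                           # off-line part shrinks by 0.5/step

# The state has collapsed onto the line: x is now (u_hat . x) * u_hat,
# and the coordinate along the line — the represented value in the
# standard view — equals the initial projection u_hat . x0.
```

In the non-linear networks considered here the projection onto the manifold is not a fixed linear map, but the same picture holds: transients die away, and only the position on the manifold survives — which is exactly why computations orthogonal to the manifold's symmetry must be read out during the transient.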