A Minimum Velocity Approach to Learning (MPLab TR)

Consider a deterministic recurrent neural network model of the form dx_t/dt = w(θ − x_t), where x_t is a vector of neural activations, w is a fixed positive definite matrix of synaptic connections, and θ is an adaptive bias vector. Since the network is linear, it is easy to find an analytical solution to the network activation process. In particular, lim_{t→∞} x_t = θ, i.e., as time progresses the network activations converge to θ. Suppose we want this network to exhibit a pattern of activation ξ at equilibrium. The standard approach would be to minimize the difference between the desired and obtained equilibrium conditions, i.e., the squared error ‖θ − ξ‖². The gradient of this cost function is proportional to (θ − ξ), and thus gradient descent learning would move θ in the direction of ξ, converging to θ = ξ. A disadvantage of this approach is that the training signals depend on equilibrium statistics. For linear networks this is not a problem, because the equilibrium statistics can be obtained analytically. In the general case, however, we would have to simulate the network numerically until equilibrium, a process that may be time-consuming.
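
To make the discussion concrete, the following NumPy sketch simulates the linear network and runs the gradient descent rule described above. The particular matrix w, target pattern ξ, learning rate, iteration counts, and the helper simulate_to_equilibrium are illustrative assumptions, not taken from the report; the cost is written as ½‖θ − ξ‖², whose gradient is exactly (θ − ξ).

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative 3-unit linear network dx_t/dt = w (theta - x_t); the specific
    # matrix w, target pattern xi, step sizes, and iteration counts are assumptions.
    n = 3
    a = rng.standard_normal((n, n))
    w = a @ a.T + n * np.eye(n)            # fixed positive definite connection matrix
    xi = np.array([1.0, -0.5, 2.0])        # desired equilibrium pattern

    def simulate_to_equilibrium(theta, dt=0.01, steps=2000):
        """Euler-integrate dx_t/dt = w (theta - x_t) starting from x_0 = 0."""
        x = np.zeros(n)
        for _ in range(steps):
            x = x + dt * w @ (theta - x)   # for this linear network, x_t -> theta
        return x

    # Gradient descent on the equilibrium cost 0.5 * ||theta - xi||^2,
    # whose gradient is (theta - xi): theta moves in the direction of xi.
    theta = np.zeros(n)
    lr = 0.1
    for _ in range(200):
        theta = theta - lr * (theta - xi)

    print("learned theta:  ", theta)                           # ~ xi
    print("equilibrium x_t:", simulate_to_equilibrium(theta))  # ~ theta ~ xi

Because the network is linear, the explicit Euler loop is unnecessary here (the equilibrium is θ in closed form); it stands in for the potentially time-consuming numerical simulation to equilibrium that a general, nonlinear network would require before any training signal becomes available.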