On the convergence of feedforward neural networks incorporating terminal attractors

Feedforward networks and the backpropagation algorithm are examined from the point of view of dynamical systems theory. A modification of the learning dynamics is investigated using the notion of a terminal attractor, i.e., a stable equilibrium solution that is guaranteed to be reached in finite time. It is found that, although convergence to a terminal attractor is in theory achieved over a very short span of the trajectory, computing the trajectory in practice often demands higher numerical accuracy than the standard algorithm does, so that only smaller steps can be taken along the trajectory at each iteration. It is shown that comparable improvements in convergence can be obtained with a simpler and computationally less expensive variant of the standard backpropagation algorithm that incorporates a dynamically varying learning rate.
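
To make the finite-time property concrete, the scalar dynamic below is a standard textbook illustration of a terminal attractor (the exponent 1/3 is an illustrative choice, not a detail given in the abstract). Because the right-hand side violates the Lipschitz condition at the equilibrium x = 0, the origin is reached in finite time rather than approached asymptotically:

```latex
% Scalar terminal attractor; the exponent 1/3 and gain k are
% illustrative assumptions, not taken from the paper itself.
\begin{align}
  \dot{x} &= -k\,x^{1/3}, \qquad k > 0, \quad x(0) = x_0 > 0, \\
  \tfrac{3}{2}\,x(t)^{2/3} &= \tfrac{3}{2}\,x_0^{2/3} - k\,t
      \quad \text{(separating variables and integrating)}, \\
  t_f &= \frac{3\,x_0^{2/3}}{2\,k}
      \quad \text{(finite time at which } x = 0\text{)}.
\end{align}
```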
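
The numerical-accuracy issue the abstract points to can be reproduced with a forward-Euler integration of the same scalar dynamic; the step sizes below are illustrative assumptions. Near x = 0 the non-Lipschitz vector field makes a coarse fixed step overshoot the equilibrium and oscillate, so a much smaller step is needed to track the finite-time decay:

```python
import numpy as np

def euler_trajectory(x0=1.0, h=0.2, steps=40):
    """Forward-Euler integration of the terminal attractor dx/dt = -x**(1/3).

    Sketch of the accuracy issue described in the abstract: the cube
    root is non-Lipschitz at x = 0, so a coarse step size h overshoots
    the equilibrium and settles into an oscillation of amplitude
    (h/2)**1.5 instead of converging.
    """
    x = x0
    traj = [x]
    for _ in range(steps):
        x -= h * np.sign(x) * abs(x) ** (1.0 / 3.0)  # signed cube root
        traj.append(x)
    return np.array(traj)

coarse = euler_trajectory(h=0.2, steps=40)   # oscillates near +/- 0.03
fine = euler_trajectory(h=0.01, steps=200)   # decays smoothly to ~3e-4
print("coarse tail:", coarse[-3:])
print("fine tail:  ", fine[-3:])
```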
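
As a sketch of the kind of dynamically varying learning rate the abstract alludes to, the "bold driver" heuristic below grows the rate while the loss keeps falling and shrinks it after an overshoot; the rule itself and its factors (1.05, 0.5) are assumptions for illustration, not details from the paper:

```python
import numpy as np

def train_bold_driver(w, grad_fn, loss_fn, lr=0.1, up=1.05, down=0.5, steps=100):
    """Gradient descent with a 'bold driver' adaptive learning rate.

    Illustrative sketch only: the rate lr grows while the loss keeps
    decreasing and is cut back (with the step rejected) when the loss
    rises.  The factors `up` and `down` are assumed values.
    """
    loss = loss_fn(w)
    for _ in range(steps):
        step = lr * grad_fn(w)
        new_loss = loss_fn(w - step)
        if new_loss < loss:          # successful step: accept it, speed up
            w, loss = w - step, new_loss
            lr *= up
        else:                        # overshoot: reject the step, slow down
            lr *= down
    return w, loss

# Usage on a toy quadratic, f(w) = ||w||^2 / 2, so grad f(w) = w.
w_opt, final_loss = train_bold_driver(
    w=np.array([3.0, -2.0]),
    grad_fn=lambda w: w,
    loss_fn=lambda w: 0.5 * np.dot(w, w),
)
print(w_opt, final_loss)
```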