The autoregressive backpropagation algorithm

Describes an extension to error backpropagation that allows the nodes in a neural network to encode state information in an autoregressive 'memory'. This neural model gives such networks the ability to learn to recognize sequences and context-sensitive patterns. Building on the work of A. Wieland (1990) on nodes with a single feedback connection, the authors generalize the method to n feedback connections and address the resulting stability issues. The learning algorithm is derived, and several applications are presented.
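To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of the forward pass of a node whose activation feeds back through n weighted delay taps; with n = 1 this reduces to the Wieland-style single-feedback node, and the paper's contribution is the generalization to n taps together with the corresponding learning rule and stability analysis. All names and the sigmoid choice are illustrative assumptions.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class AutoregressiveNode:
    """Illustrative node with n autoregressive feedback taps.

    The node's net input combines external inputs with its own
    n most recent outputs, weighted by feedback coefficients
    (an assumed formulation, sketched from the abstract).
    """

    def __init__(self, input_weights, feedback_weights, bias=0.0):
        self.w = input_weights            # weights on external inputs
        self.a = feedback_weights         # n autoregressive feedback weights
        self.bias = bias
        # delay line holding past outputs y[t-1], ..., y[t-n]
        self.history = [0.0] * len(feedback_weights)

    def step(self, inputs):
        net = self.bias
        net += sum(wi * xi for wi, xi in zip(self.w, inputs))
        net += sum(ai * yi for ai, yi in zip(self.a, self.history))
        y = sigmoid(net)
        # shift the delay line: most recent output first
        self.history = [y] + self.history[:-1]
        return y
```

In such a formulation the feedback weights must be constrained (e.g. kept small in magnitude) for the memory to remain stable, which is the stability issue the abstract alludes to.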