Backpropagation in perceptrons with feedback

Backpropagation has been shown to be an efficient learning rule for graded perceptrons. However, as initially introduced, it was limited to feedforward structures. The extension of backpropagation to systems with feedback was developed by the author in [4]. In this paper, that extension is presented, and the error-propagation circuit is interpreted as the transpose of the linearized perceptron network. The error-propagation network is shown to remain stable throughout training, and a sufficient condition for the stability of the perceptron network is derived. Finally, potentially useful relationships with Hopfield networks and Boltzmann machines are discussed.
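To make the central idea concrete, the following is a minimal numerical sketch of recurrent backpropagation of this general kind, written in modern NumPy notation. The symbols (W, I, f, y), the sigmoid nonlinearity, the relaxation loop counts, and the learning rate are illustrative assumptions, not taken from the paper; the sketch only illustrates the structure in which the error is propagated through the transpose of the linearized perceptron network.

```python
import numpy as np

def f(u):            # graded (sigmoid) unit
    return 1.0 / (1.0 + np.exp(-u))

def f_prime(u):
    s = f(u)
    return s * (1.0 - s)

rng = np.random.default_rng(0)
n = 5                                   # number of units, fully recurrent
W = 0.1 * rng.standard_normal((n, n))   # feedback weights (kept small so relaxation converges)
I = rng.standard_normal(n)              # external inputs
out = np.array([3, 4])                  # indices of the "output" units (assumed)
t = np.array([0.2, 0.8])                # targets for those units (assumed)

# Forward relaxation: iterate x <- f(Wx + I) to a fixed point of the perceptron network.
x = np.zeros(n)
for _ in range(200):
    x = f(W @ x + I)
u = W @ x + I
D = np.diag(f_prime(u))                 # local slopes at the fixed point (linearization)

# Quadratic error injected at the output units only.
g = np.zeros(n)
g[out] = x[out] - t

# Error-propagation network: the transpose of the linearized perceptron network.
# Relax y <- D (W^T y + g); its fixed point carries the backpropagated errors.
y = np.zeros(n)
for _ in range(200):
    y = D @ (W.T @ y + g)

# Gradient of the error with respect to the weights, and one gradient-descent step.
grad_W = np.outer(y, x)
W -= 0.5 * grad_W
```

In this sketch the forward pass is itself a relaxation to a fixed point rather than a single layer-by-layer sweep, and the backward pass reuses the same connectivity with the weight matrix transposed and each connection scaled by the local slope of the unit's nonlinearity, which is what "transpose of the linearized perceptron network" refers to.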