Backpropagation in non-feedforward networks

Backpropagation is a powerful supervised learning rule for networks with hidden units. However, as originally introduced, and as described in Chapter 4, it is limited to feedforward networks. In this chapter we derive the generalization of backpropagation to non-feedforward networks. The generalization takes a remarkably simple form: the error propagation network is obtained simply by linearizing, and then transposing, the network to be trained. Networks with feedback necessarily raise the problem of stability. We prove that the error propagation network is stable whenever the network being trained is itself stable, so that error propagation can always be performed once the network has settled. We also derive a sufficient condition for the stability of the non-feedforward neural network, and we discuss the possible existence of multiple stable states. Finally, we present experimental results on the use of backpropagation in non-feedforward networks.
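To make the construction concrete, the following is a minimal NumPy sketch of the procedure this summary describes (the recurrent, or Almeida-Pineda, form of backpropagation): relax the network to a fixed point, relax the linearized and transposed network to obtain the error signals, and update the weights from the resulting gradient. The network form x = f(Wx + u), the sigmoid nonlinearity, and all function and variable names here are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def relax_forward(W, u, n_steps=200, tol=1e-8):
    """Iterate the network dynamics to a fixed point x* = f(W x* + u)."""
    x = np.zeros(len(u))
    for _ in range(n_steps):
        x_new = sigmoid(W @ x + u)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

def relax_backward(W, d, g, n_steps=200, tol=1e-8):
    """Relax the linearized, transposed network to a fixed point
    z* = W^T (d * z*) + g, where d = f'(net) at the forward fixed
    point and g is the output error injected as external input."""
    z = np.zeros(len(g))
    for _ in range(n_steps):
        z_new = W.T @ (d * z) + g
        if np.max(np.abs(z_new - z)) < tol:
            return z_new
        z = z_new
    return z

def grad_step(W, u, target, out_mask, lr=0.5):
    """One gradient step on E = 0.5 * sum over outputs of (x - t)^2."""
    x = relax_forward(W, u)
    d = x * (1.0 - x)            # sigmoid derivative: f'(net) = x (1 - x)
    g = out_mask * (x - target)  # dE/dx, nonzero only on output units
    z = relax_backward(W, d, g)
    # dE/dw_pq = f'(net_p) z_p x_q, i.e. the outer product of d*z with x
    W -= lr * np.outer(d * z, x)
    return W, 0.5 * np.sum(g ** 2)

# Hypothetical usage: a 5-unit recurrent net whose last two units are outputs.
rng = np.random.default_rng(0)
n = 5
W = 0.1 * rng.standard_normal((n, n))   # small weights keep the dynamics stable
u = rng.standard_normal(n)
out_mask = np.array([0.0, 0.0, 0.0, 1.0, 1.0])
target = np.array([0.0, 0.0, 0.0, 0.2, 0.8])
for step in range(100):
    W, loss = grad_step(W, u, target, out_mask)
```

The transposed weights appear as `W.T` in `relax_backward`. Since transposition preserves eigenvalues, the backward relaxation converges exactly when the forward linearization is stable at the fixed point, which is the stability property summarized above.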