Evolutionary Learning of Recurrent Networks by Successive Orthogonal Inverse Approximations

Recurrent networks have proven to be more powerful than feedforward neural networks in terms of the classes of functions they can compute. However, because training recurrent networks is difficult, it is not clear that they offer an advantage over feedforward networks for learning from examples. This communication proposes a general computation model that lays the foundations for characterizing the classes of functions computed by feedforward nets and by convergent recurrent nets. A mathematical result then proves that convergent nets outperform feedforward nets on data-fitting problems. This result provides the basis for a new learning procedure that constrains the attractor set of a recurrent net and guarantees convergent dynamics by means of orthogonal inverse tools. The learning algorithm rests on an evolutionary selection mechanism; using the above procedure as its evaluation function, it proves robust and well suited to training convergent recurrent nets in cases where feedforward nets cannot approximate a real-parameter mapping.
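To make the approach more concrete, the sketch below illustrates one way such a scheme could be realized. It is an assumption-laden illustration, not the paper's algorithm: the Moore-Penrose pseudo-inverse stands in for the "orthogonal inverse tools", a simple (mu + lambda) Gaussian-mutation loop stands in for the evolutionary selection mechanism, and the tanh recurrent map, the fixed-point iteration, and all names and parameters are hypothetical.

```python
# Hypothetical sketch only; the fitness function, mutation scheme, and the
# pseudo-inverse readout fit are illustrative assumptions, not the authors'
# published procedure.
import numpy as np

rng = np.random.default_rng(0)

def converged_state(W, x0, steps=200, tol=1e-6):
    """Iterate the recurrent map x <- tanh(W x) until it settles.

    Returns the attractor state, or None if the trajectory fails to
    converge; the selection below discards such candidates, which is
    what constrains the population to convergent dynamics."""
    x = x0
    for _ in range(steps):
        x_new = np.tanh(W @ x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return None

def fitness(W, inputs, targets):
    """Data-fitting score of one candidate recurrent matrix.

    Each input serves as the initial state; the net is run to its
    attractor, and a linear readout is fitted to the targets with the
    Moore-Penrose pseudo-inverse (a least-squares, orthogonal-
    projection fit)."""
    states = []
    for u in inputs:
        s = converged_state(W, u)
        if s is None:                 # non-convergent dynamics: reject
            return np.inf
        states.append(s)
    S = np.stack(states)                     # (n_samples, n_units)
    readout = np.linalg.pinv(S) @ targets    # orthogonal inverse fit
    return np.mean((S @ readout - targets) ** 2)

# Tiny (mu + lambda) evolutionary loop over recurrent weight matrices.
n_units, pop_size, n_gen, sigma = 8, 30, 50, 0.1
inputs = rng.standard_normal((20, n_units))           # toy data
targets = np.sin(inputs.sum(axis=1, keepdims=True))   # toy mapping

population = [0.3 * rng.standard_normal((n_units, n_units))
              for _ in range(pop_size)]

for gen in range(n_gen):
    ranked = sorted(population, key=lambda W: fitness(W, inputs, targets))
    parents = ranked[: pop_size // 2]
    # Offspring arise by Gaussian mutation of the surviving parents.
    population = parents + [W + sigma * rng.standard_normal(W.shape)
                            for W in parents]

best = min(population, key=lambda W: fitness(W, inputs, targets))
print("best fitness:", fitness(best, inputs, targets))
```

Two design points in this sketch echo the abstract: rejecting non-convergent candidates inside the evaluation function is how the evolutionary selection enforces convergent dynamics, and the pseudo-inverse readout fit plays the role attributed to the orthogonal inverse step, turning each fitness evaluation into a cheap closed-form least-squares problem.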