Training multi-loop networks

In this paper we investigate the training of time-lagged recurrent networks having multiple feedback paths and tapped-delay inputs. Network structures of this type are useful for approximating nonlinear dynamical systems. Introducing additional feedback loops into a network structure may improve its modeling capability, but a significant price can be paid in complexity and computational burden when calculating the dynamic derivatives needed for training. The focus of this paper is the calculation of these dynamic derivatives, which must be determined or approximated in order to apply any of the popular methods for training neural networks. We illustrate the effect of multiple feedback loops on the formulation of the equations for the dynamic derivatives, and we investigate the effect on network performance and computational complexity when various dynamic derivative approximations are used in training multiple-feedback-loop networks.
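To make the role of the dynamic derivatives concrete, the following minimal sketch (not from the paper; the single-node, two-loop structure and all names are illustrative assumptions) forward-propagates the exact dynamic derivatives for a scalar tanh node with two feedback loops, in the style of real-time recurrent learning. Each feedback loop contributes an extra recurrent term to the derivative recursion, which is where the added computational burden comes from; setting truncate=True drops these terms, giving the kind of "static" derivative approximation whose effect on performance the paper studies.

```python
import numpy as np

def tanh_prime(net):
    return 1.0 - np.tanh(net) ** 2

def dynamic_derivatives(x, w_in, w_fb, truncate=False):
    """Forward-propagate dynamic derivatives for a single tanh node
    with two feedback loops:
        net(k) = w_in * x(k) + w_fb[0] * y(k-1) + w_fb[1] * y(k-2)
        y(k)   = tanh(net(k))
    Returns the outputs y and dy/dw for w = (w_in, w_fb[0], w_fb[1]).
    With truncate=True the recurrent terms are dropped, i.e. only the
    static (explicit) part of the derivative is kept."""
    K = len(x)
    y = np.zeros(K + 2)        # y[k+2] holds y(k); y[0], y[1] are the zero initial states
    dy = np.zeros((K + 2, 3))  # dy[k+2, j] = dy(k)/dw_j
    for k in range(K):
        net = w_in * x[k] + w_fb[0] * y[k + 1] + w_fb[1] * y[k]
        y[k + 2] = np.tanh(net)
        # static (explicit) part of d net / dw
        dnet = np.array([x[k], y[k + 1], y[k]])
        if not truncate:
            # dynamic part: each feedback loop couples in the
            # derivatives of the earlier outputs
            dnet = dnet + w_fb[0] * dy[k + 1] + w_fb[1] * dy[k]
        dy[k + 2] = tanh_prime(net) * dnet
    return y[2:], dy[2:]

# Illustrative comparison of the exact derivatives and the truncated
# approximation on a random input sequence (values are arbitrary).
rng = np.random.default_rng(0)
x = rng.standard_normal(50)
_, d_exact = dynamic_derivatives(x, 0.7, np.array([0.5, -0.3]))
_, d_trunc = dynamic_derivatives(x, 0.7, np.array([0.5, -0.3]), truncate=True)
print(np.max(np.abs(d_exact - d_trunc)))  # gap grows with the feedback gains
```

In this scalar sketch each additional feedback loop adds one recurrent term per weight to the recursion; in a full multi-loop network the corresponding sensitivity equations couple across all loops and weights, which is the growth in complexity the paper examines.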