Lyapunov stability-based adaptive backpropagation for discrete-time systems

Lyapunov stability-based adaptive backpropagation (LABP) for discrete-time systems is proposed in this paper. It can be applied to various aspects of adaptive signal processing. A Lyapunov function of the error between the desired and actual outputs of the neural network is first defined. The error is then backward-propagated based on Lyapunov stability theory and used to adaptively adjust the weights of the inner layers of the neural network. As a result, the error between the desired and actual outputs converges to zero asymptotically. The proposed scheme possesses a distinct advantage over conventional BP: it ensures that the algorithm does not get stuck in local minima. Furthermore, the scheme converges faster, and its stability is guaranteed by Lyapunov stability theory. A simulation example is presented to support the proposed scheme.
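The core idea above can be sketched in a minimal form. The example below is an illustration, not the paper's algorithm: it uses a single linear neuron y_k = w_k·x_k identifying a hypothetical unknown map, takes the Lyapunov candidate V_k = e_k², and chooses the weight update so that the a posteriori error on each sample contracts by a factor κ (0 < κ < 1), giving ΔV = (κ² − 1)e_k² < 0. The names `w_true`, `kappa`, and `eps` are assumptions introduced for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical plant to identify: d = w_true . x (illustration only)
w_true = np.array([0.5, -1.2, 2.0])
w = np.zeros(3)      # adaptive weights of a single linear neuron
kappa = 0.5          # contraction rate, 0 < kappa < 1
eps = 1e-8           # guards against division by zero for small inputs

errors = []
for k in range(100):
    x = rng.normal(size=3)     # input at step k
    d = w_true @ x             # desired output
    e = d - w @ x              # a priori error e_k
    errors.append(abs(e))
    # Lyapunov candidate V_k = e_k^2. The update below makes the
    # a posteriori error on this sample equal (approximately) to
    # kappa * e_k, so Delta V = (kappa^2 - 1) * e_k^2 < 0.
    w = w + (1.0 - kappa) * e * x / (x @ x + eps)

# The error magnitude decays toward zero as the weights converge.
print(errors[0], errors[-1])
```

Because each update strictly decreases the Lyapunov candidate, convergence does not depend on a descent through a possibly non-convex loss surface, which is the intuition behind the local-minima claim in the abstract.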