Fault diagnosis of electrical power systems using incremental radial basis function nets

Most neural networks proposed for system fault diagnosis are multilayer perceptrons (MLPs) trained with the popular backpropagation (BP) learning rule. The backpropagation algorithm has been shown to converge slowly and sometimes becomes trapped in local minima. It also requires the architecture (i.e. the number of hidden units) to be fixed before learning begins, so the final network size must be found by repeated trials. When the training set is large, as is typical in fault diagnosis, such repeated training consumes a large amount of time and can be frustrating. There is therefore a need for a neural network architecture that determines its own size while learning the input/output relationships and that possesses reasonably good generalization. Networks based on radial basis functions (RBFs) have emerged as a potential alternative to MLPs: they have a simple architecture and learn input/output relations faster than MLPs. In this paper we present a constructive RBF neural network, due to Fritzke, for classifying fault patterns in a model power system. Its performance is compared with that of a traditional BP network and a non-constructive RBF network in terms of network size, learning speed and generalization.