Fault diagnosis of electrical power systems using incremental radial basis function nets
Most neural networks proposed for fault diagnosis of systems are multilayer perceptrons (MLPs) trained with the popular backpropagation (BP) learning rule. The backpropagation algorithm, however, often converges slowly and can become trapped in local minima. It also requires the architecture (i.e. the number of hidden units) to be fixed before learning begins, so the final network size must be found by repeated trials. When the training set is large, as is typical in fault diagnosis, such repeated training consumes a great deal of time and can be frustrating. There is therefore a need for a neural network architecture that determines its size automatically while learning the input/output relationships, yet still possesses reasonably good generalization. Neural networks based on radial basis functions (RBFs) have emerged as potential alternatives to MLPs: they have a simple architecture and learn input/output relations faster than MLPs. In this paper we present a constructive neural network based on radial basis functions, due to Fritzke, for classification of fault patterns in a model power system. The performance of this network is compared with that of a traditional BP network and a nonconstructive RBF network in terms of size, learning speed, and generalization.
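The constructive idea described above can be illustrated with a simplified sketch: a Gaussian-RBF hidden layer whose linear output weights are fit by least squares, and a greedy insertion loop that adds a new hidden unit at the worst-fit training point until the error is small. This is not Fritzke's exact growing-cell-structures algorithm (which grows a topological net of units using accumulated error signals); the growth rule, the toy XOR data, and the width parameter `sigma` are illustrative assumptions.

```python
# Simplified sketch of a constructive (growing) RBF network.
# NOT Fritzke's growing cell structures: here a unit is simply inserted
# at the training point with the largest residual error.
import numpy as np

def rbf_features(X, centers, sigma):
    """Gaussian activations: phi[i, j] = exp(-||x_i - c_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def grow_rbf(X, y, sigma=0.5, tol=1e-3, max_units=None):
    """Start with one hidden unit; refit the linear output weights by
    least squares and insert a new center at the worst-fit point until
    the maximum residual falls below tol."""
    if max_units is None:
        max_units = len(X)
    centers = X[:1].copy()
    for _ in range(max_units):
        Phi = rbf_features(X, centers, sigma)
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        err = np.abs(Phi @ w - y)
        if err.max() < tol:
            break
        centers = np.vstack([centers, X[err.argmax()]])
    return centers, w

# Toy XOR-style data standing in for fault patterns (illustrative only).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])
centers, w = grow_rbf(X, y)
pred = rbf_features(X, centers, sigma=0.5) @ w
```

The network thus decides its own size during learning; on the XOR toy problem it grows until the training points are fit, here at most one unit per training point.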
[1] Bernd Fritzke. Supervised learning with growing cell structures. NIPS, 1993.
[2] B. Kulicke et al. Neural network approach to fault classification for high speed protective relaying. 1995.
[3] Hong-Tzer Yang et al. A new neural networks approach to on-line fault section estimation using information of protective relays and circuit breakers. 1994.
[4] John Moody et al. Fast learning in networks of locally-tuned processing units. Neural Computation, 1989.