Evolutionary strategy for classification problems and its application in fault diagnostics

Abstract A genetic algorithm (GA) based evolutionary strategy is proposed for classification problems. It comprises two aspects: evolutionary selection of the training samples and input features, and evolutionary construction of the neural network classifier. For the first aspect, a GA-based k-means-type algorithm (GKMT) is proposed, which combines the GA with the k-means-type (KMT) algorithm to select the training samples and the input features optimally and simultaneously. Under this algorithm, "singular" samples are eliminated according to the classification accuracy, and features that facilitate classification are enhanced, while useless features are suppressed or even eliminated. For the second aspect, a hierarchical evolutionary strategy is proposed for the construction and training of the neural network classifier (HENN). This strategy uses a hierarchical chromosome to encode the structure and the parameters of the network into control genes and parameter genes, respectively, so that the network is designed and trained simultaneously. Finally, an experimental study on fault diagnostics for a rotor-bearing system is presented, and the results show that the proposed evolutionary strategy is feasible and effective for classification problems.
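The feature-selection side of the GKMT idea can be illustrated with a minimal sketch: a GA evolves a binary mask over the input features, and each mask's fitness is the accuracy of a k-means-type (nearest class centroid) classifier restricted to the selected features. This is not the paper's GKMT implementation; the dataset, the function names, and all parameter values below are hypothetical, and the sample-selection half of GKMT is omitted for brevity.

```python
import random

random.seed(0)

def make_data(n=40):
    # Hypothetical toy data: 2 informative features plus 1 pure-noise feature.
    X, y = [], []
    for i in range(n):
        label = i % 2
        X.append([label + random.gauss(0, 0.2),
                  -label + random.gauss(0, 0.2),
                  random.gauss(0, 1.0)])
        y.append(label)
    return X, y

def centroid_accuracy(X, y, feat_mask):
    # k-means-type step: class centroids over the selected features,
    # then nearest-centroid classification accuracy as the GA fitness.
    feats = [j for j, m in enumerate(feat_mask) if m]
    if not feats:
        return 0.0
    cents = {}
    for label in set(y):
        pts = [x for x, lab in zip(X, y) if lab == label]
        cents[label] = [sum(p[j] for p in pts) / len(pts) for j in feats]
    correct = 0
    for x, lab in zip(X, y):
        best = min(cents, key=lambda c: sum((x[j] - cj) ** 2
                                            for j, cj in zip(feats, cents[c])))
        correct += best == lab
    return correct / len(y)

def evolve_feature_mask(X, y, pop=20, gens=30, pm=0.1):
    # Simple GA: truncation selection, one-point crossover, bit-flip mutation.
    n_feat = len(X[0])
    popn = [[random.randint(0, 1) for _ in range(n_feat)] for _ in range(pop)]
    for _ in range(gens):
        ranked = sorted(popn, key=lambda m: centroid_accuracy(X, y, m),
                        reverse=True)
        popn = ranked[:pop // 2]                     # keep the fitter half
        while len(popn) < pop:
            a, b = random.sample(ranked[:pop // 2], 2)
            cut = random.randrange(1, n_feat)        # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < pm else g for g in child]
            popn.append(child)
    return max(popn, key=lambda m: centroid_accuracy(X, y, m))

X, y = make_data()
best = evolve_feature_mask(X, y)
print("selected mask:", best, "accuracy:", centroid_accuracy(X, y, best))
```

Because the fitness is classification accuracy, masks that drop the noise feature tend to dominate the population, mirroring how GKMT suppresses useless features while enhancing discriminative ones.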
