Recursive hybrid decomposition with reduced pattern training

When neural networks are applied to large-scale, real-world classification problems, a major drawback is their inefficient use of network resources. A natural way to overcome this drawback is to decompose the problem into several smaller sub-problems, following the divide-and-conquer methodology. This paper presents a hybrid task-decomposition method, OP-RPHP (Output Parallelism with Recursive Percentage-based Hybrid Pattern training). OP-RPHP combines class decomposition and domain decomposition in its architecture, thereby incorporating the advantages of both methods. The resulting sub-networks can be grown and trained in parallel on separate processing units to reduce training time. To reduce training time further, a reduced pattern training algorithm is introduced: patterns the network has already learned are progressively removed from the active training set, and the reduction parameter p governing how many are removed is optimized to obtain the maximum reduction in training time without compromising classification accuracy. Our approach is tested on four benchmark classification problems from the UCI repository of machine learning databases. The results show that OP-RPHP with reduced pattern training outperforms the conventional OP and RPHP algorithms in both classification accuracy and training time.
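To make the reduced pattern training idea concrete, the following is a minimal Python sketch, not the paper's implementation: it assumes a scikit-learn MLPClassifier as a stand-in for the paper's sub-networks, and a simple per-stage schedule in which a fraction p of the correctly classified patterns is dropped after each stage. The function name, the staging scheme, and all hyperparameters here are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def reduced_pattern_training(X, y, p=0.5, stages=20, seed=0):
    """Illustrative sketch of percentage-based reduced pattern training.

    After each training stage, a fraction p of the patterns the network
    already classifies correctly is removed from the active training set,
    so later stages concentrate on the patterns still misclassified.
    (Assumption: the paper's exact removal schedule may differ.)
    """
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    net = MLPClassifier(hidden_layer_sizes=(10,), random_state=seed)
    active = np.arange(len(X))  # indices of patterns still being trained on
    for _ in range(stages):
        # One stage of incremental weight updates on the remaining patterns.
        net.partial_fit(X[active], y[active], classes=classes)
        correct = active[net.predict(X[active]) == y[active]]
        n_drop = int(p * len(correct))  # reduction parameter p
        if n_drop:
            drop = rng.choice(correct, size=n_drop, replace=False)
            active = np.setdiff1d(active, drop)
        if active.size == 0:  # every pattern has been learned and removed
            break
    return net
```

In an output-parallel setting, one such routine would run independently, on a separate processing unit, for each sub-network responsible for a subset of the output classes; larger p shrinks the active set faster and saves more training time, at the risk of forgetting patterns removed too early, which is why the paper optimizes p.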
