MultiLearner Based Recursive Supervised Training

In supervised learning, most single-solution neural networks, such as constructive backpropagation, give good results on some datasets but not on others. Others, such as probabilistic neural networks (PNNs), fit a curve to perfection but must be tuned manually when the data are noisy. Recursive percentage based hybrid pattern training (RPHP) overcomes this problem by recursively training subsets of the data, thereby using several neural networks. MultiLearner based recursive training (MLRT) extends this approach: a combination of existing and new learners is used, and each subset is trained with the weak learner best suited to it. Empirically, we observed that MLRT performs considerably better than RPHP and other systems on benchmark data, with an 11% improvement in accuracy on the spam dataset and comparable performance on the vowel and two-spiral problems.
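The abstract gives no pseudocode, so the following is a minimal sketch of the recursive learner-selection loop it describes, assuming a scikit-learn-style estimator interface. The function name mlrt_train, the candidate pool, the stopping thresholds, and the all-or-nothing removal of solved patterns (in place of the paper's percentage-based split) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.base import clone
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier

def mlrt_train(X, y, learners, min_subset=20):
    """Recursive multi-learner training (sketch): at each level, fit
    every candidate weak learner on the remaining patterns, keep the
    best-scoring one, drop the patterns it handles, and recurse on
    the rest."""
    chain = []
    while len(y) >= min_subset:
        # Fit each candidate learner on the current subset and score it.
        fitted = [clone(l).fit(X, y) for l in learners]
        scores = [f.score(X, y) for f in fitted]
        best = fitted[int(np.argmax(scores))]
        chain.append(best)
        correct = best.predict(X) == y
        # Stop if everything is solved or nothing was learned;
        # otherwise pass the unsolved remainder down one level.
        # (The paper instead removes a *percentage* of well-learned
        # patterns at each level.)
        if correct.all() or not correct.any():
            break
        X, y = X[~correct], y[~correct]
    return chain

# Hypothetical pool mixing two kinds of weak learners:
pool = [MLPClassifier(hidden_layer_sizes=(8,), max_iter=500),
        KNeighborsClassifier(n_neighbors=3)]
```

At prediction time the trained chain would need a gating mechanism (e.g. a pattern distributor, as in reference [25]) to route each test pattern to the learner responsible for its subset; that component is omitted from the sketch.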