Reduced Multi-class Contour Preserving Classification

This research extends the original contour preserving classification technique to support multi-class data and to reduce the number of synthesized vectors, called multi-class outpost vectors (MCOVs). The technique has been shown to function correctly on both synthetic-problem and real-world data sets. It also includes three methods that reduce the number of MCOVs by applying minimum-vector-distance selection between fundamental MCOVs and additional MCOVs, retaining only the MCOVs located at the decision boundary between consecutive classes: the FF-AA, FA-AF, and FAF-AFA reduction methods. An evaluation was conducted to assess the reduction capability, the contour preservation capability, and the classification accuracy of the three methods on non-overlapping and highly overlapping synthetic-problem data sets as well as highly overlapping real-world data sets. For non-overlapping problems, the experimental results show that the FA-AF reduction method partially reduces the number of MCOVs while preserving the contour of the problem most accurately and achieving levels of classification accuracy similar to those obtained with the whole set of MCOVs. For highly overlapping problems, the FF-AA reduction method performs best by the same criteria: it partially reduces the number of MCOVs while preserving the contour most accurately and maintaining comparable classification accuracy.
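To illustrate the general idea of minimum-vector-distance selection between two sets of outpost vectors, the following is a minimal sketch. The function name, the `keep_ratio` parameter, and the specific selection rule (keep the fundamental vectors whose nearest additional vector is closest) are illustrative assumptions; the paper's FF-AA, FA-AF, and FAF-AFA methods define their own pairings and criteria.

```python
import numpy as np

def min_distance_select(fundamental, additional, keep_ratio=0.5):
    """Hypothetical sketch: keep the fraction of fundamental MCOVs that lie
    closest to the additional MCOV set, as a proxy for vectors near the
    decision boundary between consecutive classes."""
    # Pairwise Euclidean distances: each fundamental vector vs. every additional vector.
    d = np.linalg.norm(fundamental[:, None, :] - additional[None, :, :], axis=2)
    nearest = d.min(axis=1)      # distance from each fundamental vector to its closest additional vector
    order = np.argsort(nearest)  # smallest distance first, i.e. closest to the other set
    k = max(1, int(keep_ratio * len(fundamental)))
    return fundamental[order[:k]]
```

Selecting by smallest nearest-neighbor distance concentrates the retained vectors where the two classes meet, which is the intuition behind keeping only boundary MCOVs instead of the whole synthesized set.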
