Incremental Maximum Gaussian Mixture Partition For Classification

In classification, the central task of most algorithms is to find a good decision boundary. However, many decision boundaries are too complex to be discovered directly. In this paper, we propose an Incremental Maximum Gaussian Mixture Partition (IMGMP) algorithm for classification, aimed at problems with complex decision boundaries. As a self-adaptive algorithm, it uses a divide-and-conquer strategy to compute a reasonable decision boundary step by step. The classifier combines an improved K-means clustering with a Maximum Gaussian Mixture model. The algorithm has been tested on artificial and real-life datasets to evaluate its flexibility and robustness.
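The abstract names K-means clustering and a Maximum Gaussian Mixture model as the classifier's building blocks but does not spell out the algorithm, so the following is only a minimal sketch of the general idea: fit a Gaussian mixture per class (seeded by K-means) and classify by maximum class-conditional likelihood. The choice of scikit-learn's standard KMeans and GaussianMixture (rather than the paper's improved K-means or incremental partitioning), the n_components=3 setting, and the make_moons toy dataset are all assumptions for illustration, not the authors' method.

```python
# Hypothetical sketch of maximum-Gaussian-mixture classification:
# one Gaussian mixture per class, K-means-seeded, argmax likelihood.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_moons
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {}
for label in np.unique(y_train):
    X_c = X_train[y_train == label]
    # Seed the mixture components with K-means centers (n_components=3 is arbitrary).
    seeds = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_c).cluster_centers_
    models[label] = GaussianMixture(
        n_components=3, means_init=seeds, random_state=0
    ).fit(X_c)

# Assign each test point to the class whose mixture gives the highest log-likelihood.
scores = np.column_stack([models[c].score_samples(X_test) for c in sorted(models)])
y_pred = np.argmax(scores, axis=1)
print(f"accuracy: {np.mean(y_pred == y_test):.3f}")
```

Seeding each mixture with K-means centers, as sketched here, is one common way to stabilize EM initialization; the paper's incremental partitioning presumably refines these partitions step by step rather than fitting them once.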
