Experiments for the Number of Clusters in K-Means

K-Means is one of the most popular data mining and unsupervised learning algorithms; it addresses the well-known clustering problem by partitioning a given data set into a pre-specified number of clusters K. The problem of determining "the right number of clusters" has therefore attracted considerable interest, and a number of selection methods have been proposed. However, to the authors' knowledge, no experimental results comparing these methods have been reported so far. This paper presents results of such a comparison involving eight selection options representing four approaches. We generate data according to Gaussian-mixture distributions in which the clusters vary in spread and spatial size. The most consistent results are shown by the least-squares and least-moduli versions of the intelligent K-Means method, iK-Means, by Mirkin [6]. However, the right K is reproduced best by Hartigan's method [14]. This leads us to propose an adjusted iK-Means method, which performs well in the current experimental setting.
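The sketch below illustrates, under stated assumptions, the kind of experiment the abstract describes: Gaussian-mixture data with clusters of different spreads and sizes, K-Means runs over a range of K, and Hartigan's rule for selecting K. It is not the authors' exact protocol; the mixture parameters, scikit-learn's KMeans, and the conventional threshold of 10 in Hartigan's rule are illustrative assumptions.

```python
# Minimal sketch (not the paper's exact setup): generate Gaussian-mixture data,
# run K-Means for a range of K, and pick K with Hartigan's rule.
# Assumes NumPy and scikit-learn; centers, spreads, sizes are illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# 2-D Gaussian mixture with clusters differing in spread and cardinality.
centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 8.0]])
spreads = [0.5, 1.0, 1.5]          # within-cluster standard deviations
sizes = [100, 150, 200]            # cluster sizes
X = np.vstack([
    c + s * rng.standard_normal((n, 2))
    for c, s, n in zip(centers, spreads, sizes)
])
n_samples = X.shape[0]

# Within-cluster sum of squares W_K for K = 1..k_max+1.
k_max = 10
W = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
     for k in range(1, k_max + 2)}

# Hartigan's rule: H(K) = (W_K / W_{K+1} - 1) * (n - K - 1);
# choose the smallest K with H(K) <= 10 (Hartigan's conventional threshold).
def hartigan_k(W, n, threshold=10.0):
    for k in sorted(W)[:-1]:
        h = (W[k] / W[k + 1] - 1.0) * (n - k - 1)
        if h <= threshold:
            return k
    return max(W) - 1

print("Chosen K:", hartigan_k(W, n_samples))
```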

[1] Michael E. Tipping et al., Probabilistic Principal Component Analysis, 1999.

[2] Robert Tibshirani et al., Estimating the number of clusters in a data set via the gap statistic, 2000.

[3] J. MacQueen, Some methods for classification and analysis of multivariate observations, 1967.

[4] Jill P. Mesirov et al., Consensus Clustering: A Resampling-Based Method for Class Discovery and Visualization of Gene Expression Microarray Data, Machine Learning, 2003.

[5] Ka Yee Yeung et al., Principal component analysis for clustering gene expression data, Bioinformatics, 2001.

[6] Boris Mirkin et al., Clustering For Data Mining: A Data Recovery Approach (Chapman & Hall/CRC Computer Science), 2005.

[7] Anil K. Jain et al., Algorithms for Clustering Data, 1988.

[8] G. W. Milligan et al., An examination of procedures for determining the number of clusters in a data set, 1985.

[9] B. Mirkin, Eleven Ways to Look at the Chi-Squared Coefficient for Contingency Tables, 2001.

[10] Sam T. Roweis et al., EM Algorithms for PCA and SPCA, NIPS, 1997.

[11] Geoffrey J. McLachlan et al., Mixture models: inference and applications to clustering, 1989.

[12] Catherine A. Sugar et al., Finding the Number of Clusters in a Dataset, 2003.

[13] T. Caliński et al., A dendrite method for cluster analysis, 1974.

[14] John A. Hartigan et al., Clustering Algorithms, 1975.

[15] A. Raftery et al., Model-based Gaussian and non-Gaussian clustering, 1993.

[16] Ito Wasito et al., Nearest neighbours in least-squares data imputation algorithms with different missing patterns, Computational Statistics & Data Analysis, 2006.

[17] W. Krzanowski et al., A Criterion for Determining the Number of Groups in a Data Set Using Sum-of-Squares Clustering, 1988.

[18] Peter J. Rousseeuw et al., Finding Groups in Data: An Introduction to Cluster Analysis, 1990.