Unsupervised clustering and centroid estimation using dynamic competitive learning

In this paper, an unsupervised learning algorithm is developed. Two versions of an artificial neural network, termed a differentiator, are described. It is shown that our algorithm is a dynamic variation of the competitive learning found in most unsupervised learning systems. Such systems are frequently used for pattern recognition tasks such as pattern classification and k-means clustering. Using computer simulation, it is shown that dynamic competitive learning outperforms simple competitive learning methods on cluster detection and centroid estimation problems. The simulation results demonstrate that our method detects high-quality clusters in a short training time. Either a distortion function or the minimum spanning tree method of clustering is used to verify the clustering results. By taking full advantage of all the information presented to the differentiator in the course of training, we demonstrate a powerful adaptive system capable of learning continuously changing patterns.
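For readers unfamiliar with the baseline the paper compares against, the following is a minimal sketch of *simple* competitive learning for centroid estimation: a winner-take-all rule in which each input moves only its nearest centroid, with a decaying learning rate. This is an illustrative assumption of the standard baseline, not the paper's differentiator network or its dynamic variant; the function name, data, and decay schedule are all hypothetical.

```python
import random

def competitive_learning(data, k, epochs=50, lr0=0.5, seed=0):
    """Winner-take-all competitive learning: each input moves only the
    nearest centroid toward it, with a decaying learning rate."""
    rng = random.Random(seed)
    # initialize centroids from k distinct training points
    centroids = [list(p) for p in rng.sample(data, k)]
    step = 0
    for _ in range(epochs):
        for x in data:
            step += 1
            lr = lr0 / (1 + 0.01 * step)  # hypothetical decay schedule
            # find the winning (nearest) centroid by squared distance
            win = min(range(k),
                      key=lambda j: sum((c - a) ** 2
                                        for c, a in zip(centroids[j], x)))
            # move only the winner toward the input
            centroids[win] = [c + lr * (a - c)
                              for c, a in zip(centroids[win], x)]
    return centroids

# two well-separated 2-D clusters
data = [(0.0, 0.0), (0.1, -0.1), (-0.1, 0.1),
        (5.0, 5.0), (5.1, 4.9), (4.9, 5.1)]
centroids = sorted(competitive_learning(data, k=2))
print(centroids)  # one centroid near (0, 0), the other near (5, 5)
```

The paper's contribution lies in replacing this fixed winner-take-all update with a dynamic scheme; the sketch only fixes the reference point for that comparison.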
