Improving convergence and performance of Kohonen's self-organizing scheme

Kohonen-like clustering algorithms (e.g., learning vector quantization) suffer from several major problems. For this class of algorithms, the output often depends on the initialization: if the initial cluster centers lie outside the convex hull of the input data, such an algorithm, even if it terminates, may not produce meaningful prototypes for clustering, because it updates only the winner prototype for each input vector. In this paper we propose a generalization of learning vector quantization (which we shall call a Kohonen clustering network, or KCN) that, unlike other methods, updates all the nodes for each input vector. Moreover, the network attempts to find a minimum of a well-defined objective function. The learning rules depend on the degree of match to the winner node: the lower the degree of match with the winner, the greater the impact on the nonwinner nodes. Our numerical results show that the generated prototypes do not depend on the initialization, the learning coefficient, or the number of iterations (provided KCN runs for at least 200 passes through the data). We illustrate our method with Anderson's IRIS data and compare our results with the standard Kohonen approach.
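
The abstract does not state the exact update rule, so the following Python sketch shows one plausible reading of the idea: every prototype is moved toward each input, with fuzzy c-means-style memberships standing in for the "degree of match" so that a poor match to the winner spreads more of the update onto the nonwinner nodes. The function name `kcn_sketch`, the membership formula, and the decaying learning-rate schedule are our assumptions for illustration, not the paper's specification.

```python
import numpy as np

def kcn_sketch(X, c, m=2.0, alpha0=0.5, n_passes=200, seed=0):
    """Sketch of a KCN-style scheme: every prototype is updated for each
    input vector, weighted by an assumed fuzzy degree of match (FCM-style
    memberships); the paper's actual learning rules may differ."""
    rng = np.random.default_rng(seed)
    # Initialize prototypes from random data points (inside the convex hull),
    # although the abstract reports the result is initialization-insensitive.
    V = X[rng.choice(len(X), size=c, replace=False)].astype(float)
    for t in range(n_passes):
        alpha = alpha0 * (1.0 - t / n_passes)   # decaying learning coefficient
        for x in X:
            d2 = np.sum((V - x) ** 2, axis=1) + 1e-12   # squared distances
            # Fuzzy membership of x in each cluster (standard FCM formula):
            # u_i = 1 / sum_j (d_i^2 / d_j^2)^(1/(m-1))
            u = 1.0 / ((d2[:, None] / d2[None, :]) ** (1.0 / (m - 1.0))).sum(axis=1)
            # Update ALL prototypes, not just the winner: a poor match to the
            # winner flattens the memberships, shifting more of the update
            # onto the nonwinner nodes.
            V += alpha * (u ** m)[:, None] * (x - V)
    return V

# Example usage on the four-dimensional IRIS data with c = 3 clusters:
# from sklearn.datasets import load_iris
# prototypes = kcn_sketch(load_iris().data, c=3)
```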