A dynamic approach to learning vector quantization

Learning vector quantization (LVQ) networks are generally considered a powerful pattern recognition tool. Their main drawback, however, is the competitive learning algorithm they are based upon, which suffers from the so-called underutilized or dead unit problem. To address this problem, algorithms based on a modified distance calculation, such as frequency sensitive competitive learning (FSCL), have been proposed, but their attainable performance strongly depends on the selection of an appropriate number of neurons. This choice generally requires knowledge of the number of clusters in the feature space. We propose a new supervised training algorithm for LVQ neural networks which determines the optimal number of neurons for each class by dynamically adding or removing neurons on the basis of a measure of their performance. Experiments performed on several databases of synthetic data confirm the effectiveness of our approach.
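As background, the FSCL rule mentioned above can be sketched as follows. This is a minimal, generic illustration rather than the paper's exact formulation: the winner is chosen by a distance scaled by each unit's win count, which is the standard FSCL device for reviving dead units; the learning rate and toy data are illustrative assumptions.

```python
import numpy as np

def fscl_step(x, weights, wins, lr=0.05):
    """One frequency-sensitive competitive learning (FSCL) update.

    Each unit's distance to the input is scaled by how often the unit
    has already won, so under-utilized ("dead") units eventually win.
    """
    d = np.linalg.norm(weights - x, axis=1)  # Euclidean distances
    scaled = wins * d                        # frequency-sensitive distance
    j = int(np.argmin(scaled))               # winning unit
    weights[j] += lr * (x - weights[j])      # move winner toward input
    wins[j] += 1
    return j

# Toy usage: two well-separated clusters, four units all initialized
# near the first cluster (the classic dead-unit scenario).
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.1, (50, 2)),
                  rng.normal(3, 0.1, (50, 2))])
weights = rng.normal(0, 0.1, (4, 2))
wins = np.ones(4)  # start at 1 to avoid a zero-scaled distance
for x in data:
    fscl_step(x, weights, wins)
```

Without the `wins * d` scaling, the unit closest to the first cluster would win every competition and the remaining units would stay dead; the frequency penalty forces wins to spread across units.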