Dual Weight Learning Vector Quantization

A new learning vector quantization (LVQ) approach, called dual weight learning vector quantization (DWLVQ), is presented in this paper. The basic idea is to introduce an additional weight vector (the importance vector) for each reference vector, indicating how much each feature contributes to classification. The importance vectors are adapted according to the fitness of their respective reference vectors over the training iterations. As training proceeds, the dual weights (reference vector and importance vector) are adjusted simultaneously and in a mutually reinforcing manner, ultimately improving the recognition rate. Benchmark datasets from the UCI Machine Learning Repository are used to verify the performance of the proposed approach. The experimental results show that DWLVQ yields superior performance in terms of recognition rate, computational complexity, and stability compared with existing methods, including LVQ, generalized LVQ (GLVQ), relevance LVQ (RLVQ), and generalized relevance LVQ (GRLVQ).
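The abstract does not spell out DWLVQ's exact update rules, so the following is only a minimal sketch of the dual-weight idea: each prototype carries its own per-feature importance vector used in the distance computation, and both are updated per sample. The prototype step follows the standard LVQ1 attract/repel rule, while the importance step is an assumed RLVQ-style heuristic (down-weighting features with large error on correct wins, the reverse on errors); the function names and hyperparameters are illustrative, not taken from the paper.

```python
import numpy as np

def weighted_dist(x, w, lam):
    """Importance-weighted squared distance between sample x and prototype w."""
    return np.sum(lam * (x - w) ** 2)

def train_dual_weight_lvq(X, y, prototypes, proto_labels,
                          alpha=0.05, eps=0.01, epochs=30):
    """Sketch of dual-weight training: prototypes is a (k, d) array of
    reference vectors; each prototype gets its own importance vector."""
    k, d = prototypes.shape
    importance = np.full((k, d), 1.0 / d)  # uniform importance at the start
    for _ in range(epochs):
        for x, label in zip(X, y):
            # Winner: prototype with the smallest importance-weighted distance.
            dists = [weighted_dist(x, prototypes[j], importance[j])
                     for j in range(k)]
            j = int(np.argmin(dists))
            sign = 1.0 if proto_labels[j] == label else -1.0
            # LVQ1-style prototype update: attract if correct, repel otherwise.
            prototypes[j] += sign * alpha * (x - prototypes[j])
            # Assumed RLVQ-style importance update: on a correct win, reduce
            # the weight of features with large per-feature error; increase
            # it on an incorrect win. Clip at zero and renormalize so the
            # importance vector remains a valid weighting profile.
            err = (x - prototypes[j]) ** 2
            importance[j] = np.maximum(importance[j] - sign * eps * err, 0.0)
            importance[j] /= importance[j].sum() + 1e-12
    return prototypes, importance
```

Note the key structural difference from RLVQ suggested by the abstract: the importance vector is maintained per reference vector rather than as a single global relevance profile, so each prototype can emphasize a different subset of features.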

[1] Thomas Villmann et al. Generalized relevance learning vector quantization. Neural Networks, 2002.

[2] M. V. Velzen et al. Self-organizing maps. 2007.

[3] Atsushi Sato et al. Generalized Learning Vector Quantization. NIPS, 1995.

[4] Barbara Hammer et al. Relevance determination in Learning Vector Quantization. ESANN, 2001.

[5] Teuvo Kohonen et al. The self-organizing map. Neurocomputing, 1990.

[6] Teuvo Kohonen et al. Improved versions of learning vector quantization. 1990 IJCNN International Joint Conference on Neural Networks, 1990.