Role of Initialization in SOM Networks - A Study of Self-Similar Curve Topologies

This work investigates the initialization process in self-organizing maps (SOMs). Initialization matters because it affects whether the network untangles into a linear, topologically ordered map and, consequently, the quality of the resulting map. We consider the classical 1D SOM, i.e. the algorithm presented by Kohonen, and experiment with three approaches to initialization: purely random placement, random placement followed by a short preliminary training phase (priming), and positioning the neurons along self-similar curves. Our results show that, while a randomly initialized network does eventually untangle, this occurs only after 100,000+ epochs. With priming or self-similar curve initialization, the final, linear map emerges much earlier, within 10,000 epochs at most. The benefit is a significantly reduced time to produce a usable map of the input space.
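To make the setting concrete, the following is a minimal sketch of the classical 1D Kohonen SOM update loop referenced above, together with the random and curve-based initialization schemes. The function names (`train_som_1d`, `init_random`, `init_curve`), the decay schedules, and the particular sine-based curve are illustrative assumptions, not the paper's exact construction; the authors' self-similar curves are not specified in this excerpt.

```python
import numpy as np

def init_random(n_neurons, rng):
    # Random initialization: neurons scattered uniformly in the unit square.
    return rng.random((n_neurons, 2))

def init_curve(n_neurons):
    # Curve-based initialization: neurons placed in order along a smooth
    # curve through the input space, so the chain starts out untangled.
    # (A sine arc stands in here for the paper's self-similar curves.)
    t = np.linspace(0.0, 1.0, n_neurons)
    return np.column_stack([t, 0.5 + 0.25 * np.sin(2 * np.pi * t)])

def train_som_1d(weights, data, epochs, lr0=0.5, sigma0=None, rng=None):
    # Classical 1D Kohonen SOM: for each sample, find the best-matching
    # unit (BMU) and pull it and its chain neighbours toward the sample,
    # with a Gaussian neighbourhood that shrinks over time.
    n = len(weights)
    sigma0 = sigma0 if sigma0 is not None else n / 2.0
    rng = rng if rng is not None else np.random.default_rng(0)
    idx = np.arange(n)
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1.0 - frac)                   # decaying learning rate
        sigma = max(sigma0 * (1.0 - frac), 0.5)   # shrinking neighbourhood
        x = data[rng.integers(len(data))]         # one random input sample
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
        h = np.exp(-((idx - bmu) ** 2) / (2.0 * sigma ** 2))
        weights += lr * h[:, None] * (x - weights)
    return weights
```

Priming, in this sketch, would simply mean calling `train_som_1d` for a small number of epochs on randomly initialized weights before the main training run.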