A parallel topological feature map in APL

One can distinguish two different approaches to neural networks: supervised networks and self-organizing or unsupervised networks. The first type of net is supplied with an ideal result for each input. During the learning procedure, the net adjusts the weighting factors of the links between neurons so that the input feature vectors map to the ideal output. Such nets are used, for example, in robotics, where the ideal result is well known: the position in which the robot should be placed. For cases where no ideal result is known, the second type of net, the so-called self-learning Topological Feature Map (TFM), is appropriate. This paper introduces such a neural net, based on the idea of Kohonen's TFM. The original algorithm is highly sequential and therefore not well suited to an APL implementation. Parallelizing the algorithm led to significant improvements in both speed and convergence to the global optimum.
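
As a rough illustration of the array-oriented formulation this kind of parallelization aims at, the following Dyalog APL sketch shows one generic Kohonen update step expressed with whole-array operations instead of an explicit loop over neurons. The function Step, its argument order, and the Gaussian neighbourhood used here are assumptions for illustration only, not the algorithm presented in this paper.

    Step←{
        (W X G A R)←⍵                 ⍝ weights (n×d), input (d), grid coords (n×2), rate, radius
        E←((⍴W)⍴X)-W                  ⍝ difference between the input and every node's weights
        D←+/E*2                       ⍝ squared distance of each node to the input
        C←D⍳⌊/D                       ⍝ index of the winning (best-matching) node
        H←*-(+/(G-(⍴G)⍴C⌷G)*2)÷2×R*2  ⍝ Gaussian neighbourhood strength around the winner
        W+A×E×⍉(⌽⍴W)⍴H                ⍝ move all weights toward the input in one expression
    }
    ⍝ Example (hypothetical values): W←Step W X G 0.5 2 applies one step with rate 0.5, radius 2.

Because every neuron's distance, neighbourhood weight, and weight correction are computed in a single array expression, such a formulation maps naturally onto APL, which is the spirit of the parallelization discussed in this paper.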