Self-Organizing Neural Networks

The learning algorithms considered so far for the single perceptron, linear adaline, and multilayer perceptron belong to the class of supervised learning algorithms. In this case the training data are divided into input signals, x(n), and target signals, d(n). A typical learning algorithm is driven by error signals ε(n), which are the differences between the actual network output, y(n), and the desired (or target) output for a given input. For pattern learning, the weight update can be expressed in the following general form: ∆w(n) = L(w(n), x(n), ε(n)), where L represents a learning algorithm. If we say that a neural network describes a model of the data, then a multilayer perceptron describes the data in the form of a hypersurface which approximates the functional relationship between x(n) and d(n).
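As a concrete instance of the general form ∆w(n) = L(w(n), x(n), ε(n)), the following sketch shows the LMS (adaline) rule, for which L reduces to η ε(n) x(n). The learning rate η, the two-dimensional toy target, and the iteration count are illustrative assumptions, not part of the text above.

```python
import numpy as np

def lms_update(w, x, d, eta=0.1):
    """One pattern-mode LMS (adaline) step: an instance of the general
    form delta_w = L(w, x, eps) with L = eta * eps * x (assumed here)."""
    y = w @ x                 # actual linear output y(n)
    eps = d - y               # error signal eps(n) = d(n) - y(n)
    return w + eta * eps * x  # w(n+1) = w(n) + delta_w(n)

# Illustrative run: learn a realizable linear target d = 2*x0 - x1
rng = np.random.default_rng(0)
w = np.zeros(2)
for _ in range(500):
    x = rng.uniform(-1.0, 1.0, size=2)
    d = 2.0 * x[0] - 1.0 * x[1]
    w = lms_update(w, x, d)
print(w)  # should approach [2, -1]
```

Because the target here is exactly realizable by a linear unit, the error signal drives the weights toward the generating coefficients; for noisy data the same rule converges only in the mean.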