Robustness of Hebbian and anti-Hebbian learning

Fault tolerance of artificial neural networks (ANNs) has been studied mostly for passive systems, which do not react in any special way to compensate for the effects of internal failures. Systems with active fault tolerance reorganize their resources to counteract the effects of faults. Studied examples describe adaptation or retraining after internal faults; other examples suggest prewired self-repair mechanisms. In this paper, the fault tolerance of a self-organizing Hebbian and anti-Hebbian (HAH) network is studied. In the case of self-organized learning the question of 'performance' arises, since the network always does something. The authors' starting point is that HAH networks perform soft competition: HAH neurons search and compete for high-order correlations and divide the 'world' between themselves. In this sense the network should provide a 'quasi-orthogonal representation', and network performance may be judged by the orthogonality of the neural filter vectors. Different learning algorithms will perform differently, since the orthogonality of the receptive fields depends strongly on, for example, the postsynaptic or presynaptic nature of learning. A geometrical problem, the formation of spatial filters, is studied, since it allows easy judgement. In addition, the authors restrict their studies to cases where the networks are started from 'scratch'. Neural network parameters, such as learning rates, neural activities, and sharpness of nonlinearities, are allowed to differ between neurons.
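
To make the setting concrete, the following is a minimal sketch, assuming a Foldiak-style HAH network (Hebbian feedforward learning with an Oja-style decay, anti-Hebbian lateral decorrelation) rather than the authors' exact rules; the network sizes, the per-neuron learning rates, and the orthogonality score are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 16, 4                              # illustrative sizes
W = rng.normal(scale=0.1, size=(n_out, n_in))    # feedforward filter vectors
M = np.zeros((n_out, n_out))                     # lateral anti-Hebbian weights
eta_w = 0.02 * (1.0 + 0.1 * rng.random(n_out))   # per-neuron Hebbian rates
eta_m = 0.05                                     # anti-Hebbian rate

def respond(x, n_iter=20):
    """Settle the recurrent activities y = tanh(W x + M y)."""
    y = np.zeros(n_out)
    for _ in range(n_iter):
        y = np.tanh(W @ x + M @ y)
    return y

def learn(x):
    global W, M
    y = respond(x)
    # Hebbian feedforward update; the Oja-style decay term keeps
    # each filter vector bounded
    W += eta_w[:, None] * (np.outer(y, x) - (y ** 2)[:, None] * W)
    # anti-Hebbian lateral update decorrelates neurons ('soft competition')
    M -= eta_m * np.outer(y, y)
    np.fill_diagonal(M, 0.0)          # no self-inhibition
    np.clip(M, None, 0.0, out=M)      # lateral weights stay inhibitory

def orthogonality(W):
    """Mean |cosine| between distinct filters; 0 means fully orthogonal."""
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    G = Wn @ Wn.T
    return np.abs(G[~np.eye(n_out, dtype=bool)]).mean()

for _ in range(2000):
    learn(rng.normal(size=n_in))
print("mean |cos| between filter vectors:", orthogonality(W))
```

Under such a measure, an internal fault could be simulated by, for example, zeroing one filter row and observing whether continued learning restores a low mean |cosine| between the remaining filters.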