Fault-tolerance of associative memories based on neural networks

The authors discuss the hardware fault tolerance of associative memories. They study device parameter variations across a chip, which primarily affect the characteristics of the analog circuits used by several architectures. The effects of these errors on network performance are examined for three typical representatives: the Hopfield model, the self-organizing feature map, and the Boltzmann machine. The authors present a worst-case estimation of the guaranteed fault tolerance of these networks and discuss the consequences for the properties of the associative memories. The main result is that the fault tolerance decreases with the number of weights, but it can be improved by using sparse codes or by self-organization in connection with added resources.
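
The following is a minimal sketch, not the authors' experiment, of how device parameter variations can be modelled in simulation: a Hopfield associative memory is trained with the Hebbian outer-product rule, its weights are scaled by independent Gaussian factors standing in for the spread of the analog multipliers, and recall quality is compared against the ideal weights. Network size, pattern count and the noise level sigma are illustrative assumptions, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)

N = 64        # neurons (assumed size)
P = 5         # stored patterns (assumed)
sigma = 0.2   # relative spread of the weight perturbation (assumed)

patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian outer-product learning rule, zero self-coupling
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def recall(weights, probe, steps=20):
    """Synchronous recall starting from a corrupted probe pattern."""
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(weights @ s)
        s[s == 0] = 1
    return s

# Model device variations: each weight is multiplied by an independent
# Gaussian factor, mimicking parameter spread across the chip.
W_faulty = W * (1.0 + sigma * rng.standard_normal(W.shape))

# Probe with a stored pattern corrupted in 10% of its bits
target = patterns[0]
probe = target.copy()
flip = rng.choice(N, size=N // 10, replace=False)
probe[flip] *= -1

for name, weights in [("ideal", W), ("perturbed", W_faulty)]:
    out = recall(weights, probe)
    overlap = np.abs(out @ target) / N
    print(f"{name:9s} weights: overlap with stored pattern = {overlap:.2f}")

Sweeping sigma (or the number of stored patterns) in such a simulation gives an empirical counterpart to the paper's worst-case estimates: recall overlap degrades as the weight count and the parameter spread grow.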
