To explore the relationship between network structure and function, we studied the computational performance of Hopfield-type attractor neural networks with regular lattice, random, small-world, and scale-free topologies. The random configuration is the most efficient for storage and retrieval of patterns by the network as a whole. In the scale-free case, however, retrieval errors are not distributed uniformly among the nodes: the portion of a pattern encoded by the subset of highly connected nodes is more robust and more efficiently recognized than the rest of the pattern. The scale-free network thus achieves very strong partial recognition. These findings have suggestive implications for brain function and social dynamics.
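To make the setup concrete, the following sketch (an illustration under stated assumptions, not the exact simulation protocol of the study) builds a Hopfield-type network whose Hebbian couplings are restricted to the edges of a given graph and relaxed by asynchronous zero-temperature dynamics. It uses a Barabasi-Albert graph from networkx as a stand-in for the scale-free topology and compares per-node retrieval errors on the most highly connected nodes with those on the rest of the network.

# Minimal sketch: Hopfield-type attractor network on a given graph topology.
# Assumptions (not from the original text): Hebbian couplings pruned to graph
# edges, asynchronous sign-update dynamics, Barabasi-Albert graph as the
# scale-free example, hubs defined as the top-10% highest-degree nodes.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

def hebbian_weights(adj, patterns):
    """Couplings J_ij = (1/N) * sum_mu xi_i^mu xi_j^mu, kept only on existing edges."""
    n = adj.shape[0]
    J = patterns.T @ patterns / n      # dense Hebbian matrix
    J *= adj                           # prune couplings to the topology
    np.fill_diagonal(J, 0.0)
    return J

def retrieve(J, state, sweeps=20):
    """Asynchronous zero-temperature dynamics: s_i <- sign(sum_j J_ij s_j)."""
    s = state.copy()
    n = len(s)
    for _ in range(sweeps):
        for i in rng.permutation(n):
            h = J[i] @ s
            if h != 0:
                s[i] = np.sign(h)
    return s

# Scale-free topology, a few stored patterns, retrieval from a corrupted cue.
n, p_patterns = 200, 5
G = nx.barabasi_albert_graph(n, m=3, seed=0)
adj = nx.to_numpy_array(G)
patterns = rng.choice([-1.0, 1.0], size=(p_patterns, n))
J = hebbian_weights(adj, patterns)

cue = patterns[0].copy()
flip = rng.choice(n, size=n // 5, replace=False)   # corrupt 20% of the cue
cue[flip] *= -1
final = retrieve(J, cue)

# Compare retrieval errors on hubs versus the rest of the network.
errors = final != patterns[0]
degree = adj.sum(axis=1)
hubs = degree >= np.quantile(degree, 0.9)
print("overall error rate:", errors.mean())
print("error rate on hubs:", errors[hubs].mean())
print("error rate elsewhere:", errors[~hubs].mean())

In this kind of setup one would expect the hub nodes to show a lower error rate than the low-degree nodes, which is the partial-recognition effect described above; the specific graph generator, network size, and hub threshold here are illustrative choices only.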