Associative networks (ANs) and similar information-processing networks are attracting increasing interest in several areas of computer science as well as in microelectronics. Many models of such networks have been discussed in the neuroscience literature (Anderson and Rosenfeld 1987). Software simulations have shown that ANs store and process information effectively. The following attractive characteristics of these networks stem from distributing processing power among the data storage devices to minimize data movement:
Associative recall of information means the reconstruction of stored patterns even when the input offers only a portion or a noisy version of those patterns.
Tolerance of hardware failures means that the loss of devices in the network causes only a slight decrease in recall accuracy and does not disrupt the overall function of the network.
Parallel processing offers a concept in which every device in the network does useful work during every operation, whereas in a conventional microcomputer a fast processor executes instructions quickly while the memory sits idle during each instruction cycle.
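The associative-recall property above can be illustrated with a minimal sketch of a binary, Willshaw-style associative memory: patterns are stored by OR-accumulating their outer products into a weight matrix, and recall thresholds a matrix-vector product by the number of active cue bits. The pattern size, the single stored pattern, and the threshold rule are illustrative assumptions, not the specific architecture discussed here.

```python
import numpy as np

def train(patterns):
    """Store binary auto-associative patterns via clipped Hebbian learning.

    Each stored pattern contributes its outer product, OR-ed into the
    binary weight matrix (Willshaw-style clipping).
    """
    n = patterns.shape[1]
    W = np.zeros((n, n), dtype=np.uint8)
    for p in patterns:
        W |= np.outer(p, p).astype(np.uint8)
    return W

def recall(W, cue):
    """Recall a pattern: threshold W @ cue by the number of active cue bits."""
    active = int(cue.sum())
    return (W @ cue >= active).astype(np.uint8)

# Store one sparse pattern, then recall it from a degraded cue
# in which one of the active bits has been dropped.
pattern = np.array([1, 0, 1, 0, 0, 1, 0, 0], dtype=np.uint8)
W = train(pattern[np.newaxis, :])
cue = pattern.copy()
cue[5] = 0  # partial input: one active bit missing
print(recall(W, cue))  # the full stored pattern is reconstructed
```

Because every output bit is computed independently from the weight matrix, the same structure also hints at the other two characteristics: the matrix-vector product is inherently parallel, and redundancy in the stored correlations gives a degree of tolerance to individual weight failures.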