Demand for applications requiring massive parallelism in symbolic environments has renewed research interest in models labeled neural networks. These models consist of many simple, highly interconnected nodes, and computation takes place as data flows among the nodes of the network. To date, most models have proposed nodes based on simple analog functions, where inputs are multiplied by weights and summed, the total then optionally being transformed by an arbitrary function at the node. Learning in these systems is accomplished by adjusting the weights on the input lines. This paper discusses the use of digital (boolean) nodes as a primitive building block in connectionist systems. Digital nodes naturally engender new paradigms and mechanisms for learning and processing in connectionist networks. The digital nodes are used as the basic building block of a class of models called ASOCS (Adaptive Self-Organizing Concurrent Systems). These models combine massive parallelism with the ability to adapt in a self-organizing fashion. Basic features of standard neural network learning algorithms and those proposed for digital nodes are compared and contrasted; the latter mechanisms can lead to vastly improved efficiency for many applications.
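The contrast between the two node types can be sketched as follows. This is a minimal illustration, not the paper's formal definitions: the function names, the choice of tanh as the analog nonlinearity, and the particular logic functions are assumptions made for exposition.

```python
import math

def analog_node(inputs, weights, f=math.tanh):
    """Classic analog node: inputs are multiplied by weights and
    summed; the total is optionally transformed by a function f.
    Learning adjusts the values in `weights`."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return f(total)

def boolean_node(a, b, function="AND"):
    """Digital (boolean) node: computes a logic function of its
    binary inputs. Learning here selects or modifies the logic
    function itself rather than tuning continuous weights."""
    table = {"AND": a and b, "OR": a or b, "XOR": a != b}
    return table[function]
```

The key difference this highlights is where adaptation happens: the analog node adapts by changing real-valued weights, while the digital node adapts by changing the boolean function it computes.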