1. Introduction

Current technology trends, demanding applications, and the limitations of sequential computation have combined to rejuvenate research into connectionist and neurally inspired computational mechanisms. A new class of multilayer connectionist systems, adaptive self-organizing concurrent systems (ASOCS), was created with the goal of fulfilling the desired functionality of connectionist methods while overcoming some of the drawbacks of current models. A number of specific ASOCS architectures have been proposed [5,6], and VLSI implementation and testing are underway [2]. The goal of this chapter is to introduce the basic features and functional mechanisms of these new models.

ASOCS is similar to most decision-making neural network models in that it attempts to learn an adaptive set of arbitrary vector mappings. However, it differs dramatically in its mechanisms. ASOCS is based on networks of adaptive digital elements that self-modify using local information. Function specification is entered incrementally as rules, rather than as complete input-output vectors, so that the processing network can extract critical features from a large environment and produce output in parallel. Learning also exploits parallelism and self-organization, so that a new rule is learned completely in time linear in the depth of the network. The model guarantees learning of any arbitrary mapping of boolean input-output vectors. It is also stable: learning does not erase any previously learned mappings except those explicitly contradicted.

The atomic unit of knowledge input to the system is known as an instance, and it consists of a variable-length boolean input vector and an associated target output. The instance input vector typically contains only the critical features (a small subset of the total environment input) sufficient to decide the current output.
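The idea that an instance specifies only its critical features can be sketched concretely. The following is a minimal illustration, not the published ASOCS data structure: an instance is modeled as a partial assignment over named boolean environment variables plus a target output, and any variable absent from the instance is a don't-care. All variable names here are illustrative assumptions.

```python
# Hedged sketch: an "instance" as (partial boolean input vector, target output).
# Variables not mentioned in the instance are don't-cares, so a single
# instance covers many full environment states.

def matches(instance_inputs, environment):
    """True if the full environment state agrees with the instance's
    critical features (its partial input vector)."""
    return all(environment.get(var) == val
               for var, val in instance_inputs.items())

# Instance: "if A is 1 and C is 0, output 1" -- B is a don't-care.
instance_inputs, target = {"A": 1, "C": 0}, 1

state1 = {"A": 1, "B": 0, "C": 0}   # agrees with the critical features
state2 = {"A": 1, "B": 1, "C": 0}   # also agrees (B is irrelevant)
state3 = {"A": 0, "B": 0, "C": 0}   # disagrees on A

print(matches(instance_inputs, state1))  # True
print(matches(instance_inputs, state2))  # True
print(matches(instance_inputs, state3))  # False
```

With three environment variables, this one instance stands in for the two full states in which A = 1 and C = 0, regardless of B.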
Thus, a single instance represents many possible states of the environment. Any propositional boolean function can be represented by a set of instances. Instances are input to, and learned incrementally by, the system. A new instance may contradict earlier instances, in which case only the contradicted portions of the older instances become invalid. A new instance is broadcast to a multilayer network of adaptive digital nodes. Using only local information (the internal state of the nodes is visible neither from outside the system nor to other nodes), the nodes adjust their functions and interconnectivity in a parallel, self-organizing fashion so as to maintain a network that consistently fulfills all instances received by the system. The network can build …
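The contradiction semantics described above can be illustrated with a toy stand-in for the network: keep instances in arrival order and answer a query with the most recently learned matching instance. This gives newer instances precedence exactly where they contradict older ones, while the uncontradicted portions of older instances remain in force. This is a sketch of the intended input-output behavior only, not the self-organizing node mechanism the chapter goes on to describe.

```python
# Hedged sketch of incremental instance learning with contradiction
# resolution: newest matching instance wins, so an older instance stays
# valid everywhere the newer one's conditions do not apply.

def learn(instance_list, inputs, output):
    """Append a new instance; later instances take precedence."""
    instance_list.append((inputs, output))

def evaluate(instance_list, environment):
    """Answer with the most recently learned matching instance."""
    for inputs, output in reversed(instance_list):  # newest first
        if all(environment.get(v) == val for v, val in inputs.items()):
            return output
    return None  # no instance applies

instances = []
learn(instances, {"A": 1}, 1)           # A=1 -> 1
learn(instances, {"A": 1, "B": 1}, 0)   # contradicts the above only where B=1

print(evaluate(instances, {"A": 1, "B": 0}))  # 1 (old instance survives here)
print(evaluate(instances, {"A": 1, "B": 1}))  # 0 (contradicted portion updated)
```

Note that only the overlapping portion of the first instance (A = 1 with B = 1) is overridden; for A = 1 with B = 0 the original mapping still holds.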
[1] Rik Achiel Verstraete. Assignment of functional responsibility in perceptrons, 1986.
[2] A. A. Mullin, et al. Principles of neurodynamics, 1962.
[3] Teuvo Kohonen, et al. Self-organization and associative memory: 3rd edition, 1989.
[4] Tony R. Martinez, et al. Adaptive Parallel Logic Networks, J. Parallel Distributed Comput., 1988.
[5] Stephen Grossberg, et al. A massively parallel architecture for a self-organizing neural pattern recognition machine, Comput. Vis. Graph. Image Process., 1988.
[6] Stephen S. Yau, et al. Universal logic circuits and their modular realizations, AFIPS '68 (Spring), 1968.
[7] Tony R. Martinez, et al. Digital Neural Networks, Proceedings of the 1988 IEEE International Conference on Systems, Man, and Cybernetics, 1988.
[8] Tony R. Martinez, et al. Adaptive self-organizing logic networks, 1986.