We propose a learning architecture for growing complex artificial neural networks. The complexity of the growing network is adapted automatically to the complexity of the task. The algorithm generates a feedforward network bottom-up by cyclically inserting cascaded hidden layers. The inputs of a hidden-layer unit are locally restricted with respect to the input space by means of a new kind of activation function. In contrast to the cascade-correlation learning architecture, we introduce different correlation measures for training network units with different goals. Task decomposition between subnetworks is achieved by maximizing the anticorrelation between the outputs of the hidden-layer units and by a connection routing algorithm between the hidden layers. These features give the learning architecture its name: TACOMA (task decomposition, correlation measures and local attention neurons). Results are shown for two difficult problems and compared with those produced by the CASCOR algorithm.
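The abstract mentions locally restricted ("attention") hidden units and an anticorrelation measure between hidden-unit outputs, but gives no formulas. The following Python sketch illustrates one plausible reading of those two ideas; the Gaussian window, the Pearson-style anticorrelation score, and all names are assumptions made for illustration, not the authors' definitions.

# A minimal sketch (not the authors' code) of two ideas from the abstract:
# (1) a hidden unit whose response is locally restricted in input space by a
#     Gaussian "attention" window, and (2) an anticorrelation score between two
#     hidden units' outputs, which a growing algorithm could maximize to push
#     units toward different subtasks. All names and details are illustrative.
import numpy as np

def local_attention_unit(x, w, b, center, width):
    """Sigmoidal unit gated by a Gaussian window around `center` in input space."""
    sigmoid = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    window = np.exp(-np.sum((x - center) ** 2, axis=-1) / (2.0 * width ** 2))
    return sigmoid * window

def anticorrelation(y1, y2):
    """Negative Pearson correlation of two units' outputs over a batch of patterns."""
    y1c, y2c = y1 - y1.mean(), y2 - y2.mean()
    denom = np.sqrt((y1c ** 2).sum() * (y2c ** 2).sum()) + 1e-12
    return -(y1c * y2c).sum() / denom

# Example: two units attending to different regions of a 2-D input space.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(200, 2))
y_a = local_attention_unit(x, np.array([1.0, -1.0]), 0.0, np.array([-0.5, 0.5]), 0.3)
y_b = local_attention_unit(x, np.array([-1.0, 1.0]), 0.0, np.array([0.5, -0.5]), 0.3)
print("anticorrelation:", anticorrelation(y_a, y_b))

Because each unit responds mainly inside its own window, the two outputs are only weakly correlated; a growing procedure that selects candidates by such an anticorrelation score would tend to cover different regions of the input space with different subnetworks.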
[1] K. Lang, et al., "Learning to tell two spirals apart," 1988.
[2] Teuvo Kohonen, et al., "Self-Organization and Associative Memory, Third Edition," Springer Series in Information Sciences, 1989.
[3] Christian Lebiere, et al., "The Cascade-Correlation Learning Architecture," NIPS, 1989.
[4] Helge J. Ritter, et al., "Generalization Abilities of Cascade Network Architecture," NIPS, 1992.
[5] Michael I. Jordan, et al., "Hierarchies of Adaptive Experts," NIPS, 1991.
[6] F. J. Smieja, et al., "Reflective Modular Neural Network Systems," 1992.
[7] Sukhan Lee, et al., "A Gaussian potential function network with hierarchically self-organizing learning," Neural Networks, 1991.