Cascade correlation is a flexible, efficient, and fast algorithm for supervised learning. It builds the network incrementally, adding hidden units one at a time until the desired input/output mapping is achieved, and it connects every previously installed unit to each new unit. Consequently, each new unit in effect adds a new layer, and the fan-in of the hidden and output units keeps growing as units are added. The resulting structure can be hard to implement in VLSI because the connections are irregular and the fan-in is unbounded. Moreover, the depth, and hence the propagation delay through the resulting network, is directly proportional to the number of units and can be excessive. We have modified the algorithm to generate networks with restricted fan-in and small depth (propagation delay) by controlling the connectivity. Our results reveal a tradeoff between connectivity and other performance attributes such as depth, the total number of independent parameters, and learning time.
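To make the connectivity tradeoff concrete, the following is a minimal Python sketch contrasting the standard cascade connectivity (each new unit reads every input and every earlier hidden unit) with a fan-in-restricted variant. The max_fan_in parameter, the function names, and the oldest-sources-first selection rule are illustrative assumptions for this sketch, not the connectivity-control scheme used in the paper.

    def cascade_connections(n_inputs, n_hidden, max_fan_in=None):
        """Return, for each hidden unit, the list of source units it reads.

        Units 0..n_inputs-1 are the inputs; hidden unit i is unit n_inputs + i.
        With max_fan_in=None this reproduces the standard cascade: every new
        unit connects to all inputs and all earlier hidden units, so fan-in
        grows without bound and each unit adds a layer.  With max_fan_in set,
        each unit keeps only its oldest candidate sources (a hypothetical
        rule chosen here because it also keeps units close to the inputs,
        bounding depth as well as fan-in).
        """
        connections = []
        for i in range(n_hidden):
            sources = list(range(n_inputs + i))   # inputs + earlier hidden units
            if max_fan_in is not None:
                sources = sources[:max_fan_in]    # restrict the fan-in
            connections.append(sources)
        return connections

    def depth(connections, n_inputs):
        """Longest path (propagation delay) from an input to the last unit."""
        d = [0] * n_inputs                        # inputs sit at depth 0
        for sources in connections:
            d.append(1 + max(d[s] for s in sources))
        return d[-1]

    if __name__ == "__main__":
        full = cascade_connections(n_inputs=2, n_hidden=8)
        bounded = cascade_connections(n_inputs=2, n_hidden=8, max_fan_in=3)
        print("unbounded: fan-in =", len(full[-1]), " depth =", depth(full, 2))
        print("bounded:   fan-in =", len(bounded[-1]), " depth =", depth(bounded, 2))

Running the sketch prints a fan-in of 9 and a depth of 8 for the unbounded cascade with eight hidden units, versus a fan-in of 3 and a depth of 2 for the restricted variant, illustrating how controlling connectivity bounds both quantities at once.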