Comparative Bibliography of Ontogenic Neural Networks

One of the most powerful aspects of neural networks is their ability to adapt to a problem by changing their interconnection strengths according to a predetermined learning rule. One of their main drawbacks, however, is the lack of principled methods for determining the topology of the network, that is, the number of layers and the number of neurons per layer. A special class of neural networks tries to overcome this problem by letting the network automatically adapt its topology to the problem as well. These are the so-called ontogenic neural networks. Other potential advantages of ontogenic networks are improved generalization, implementation optimization (in size and/or execution speed), and the avoidance of local minima.
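
For concreteness, below is a minimal sketch of one ontogenic strategy, constructive growth by dynamic node creation in the spirit of Ash [7]: a one-hidden-layer network is trained by plain gradient descent, and whenever the training error plateaus above a target, one freshly initialized hidden unit is appended. The class name, thresholds, and growth schedule are illustrative assumptions, not a reconstruction of any of the surveyed algorithms.

```python
import numpy as np

# Illustrative sketch of dynamic node creation (cf. Ash [7]).
# Hypothetical names and thresholds; not a reference implementation.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class GrowingNet:
    """One-hidden-layer network whose hidden layer can grow."""

    def __init__(self, n_in, n_hidden=1):
        self.W1 = rng.normal(0.0, 0.5, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, n_hidden)
        self.b2 = 0.0

    def forward(self, X):
        self.h = sigmoid(X @ self.W1.T + self.b1)    # hidden activations
        return sigmoid(self.h @ self.W2 + self.b2)   # one output per row

    def train_step(self, X, y, lr=0.5):
        """One batch gradient-descent step on E = mean((out - y)^2) / 2."""
        out = self.forward(X)
        err = out - y
        d_out = err * out * (1.0 - out)               # dE/d(pre-output)
        d_h = np.outer(d_out, self.W2) * self.h * (1.0 - self.h)
        self.W2 -= lr * (d_out @ self.h) / len(X)
        self.b2 -= lr * d_out.mean()
        self.W1 -= lr * (d_h.T @ X) / len(X)
        self.b1 -= lr * d_h.mean(axis=0)
        return 0.5 * np.mean(err ** 2)

    def add_unit(self, n_in):
        """Ontogenic step: append one freshly initialized hidden unit."""
        self.W1 = np.vstack([self.W1, rng.normal(0.0, 0.5, n_in)])
        self.b1 = np.append(self.b1, 0.0)
        self.W2 = np.append(self.W2, rng.normal(0.0, 0.5))

# XOR: a toy target that a single hidden unit cannot represent,
# so the network is forced to grow before it can fit the data.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

net = GrowingNet(n_in=2)
prev = np.inf
for epoch in range(20000):
    loss = net.train_step(X, y)
    if epoch % 500 == 499:                     # periodic plateau check
        if prev - loss < 1e-4 and loss > 0.01:
            net.add_unit(n_in=2)               # error is stuck: grow
        prev = loss

print(f"final loss {loss:.4f} with {len(net.b1)} hidden units")
```

Pruning approaches such as Optimal Brain Damage [9] and Optimal Brain Surgeon [2] take the complementary route, starting large and removing weights, but the plateau-then-grow loop above captures the constructive half of the ontogenic idea.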

[1] Yoshio Hirose, et al. Backpropagation algorithm which varies the number of hidden units, 1989, International 1989 Joint Conference on Neural Networks.

[2] Babak Hassibi, et al. Second Order Derivatives for Network Pruning: Optimal Brain Surgeon, 1992, NIPS.

[3] Masafumi Hagiwara. Novel backpropagation algorithm for reduction of hidden units and acceleration of convergence using artificial selection, 1990, 1990 IJCNN International Joint Conference on Neural Networks.

[4] Stephen Jose Hanson, et al. Meiosis Networks, 1989, NIPS.

[5] Jean-Pierre Nadal, et al. Neural trees: a new tool for classification, 1990.

[6] B. Fritzke, et al. FLEXMAP - a neural network for the traveling salesman problem with linear time and space complexity, 1991, Proceedings of the 1991 IEEE International Joint Conference on Neural Networks.

[7] Timur Ash, et al. Dynamic node creation in backpropagation networks, 1989.

[8] Guillaume Deffuant. Neural units recruitment algorithm for generation of decision trees, 1990, 1990 IJCNN International Joint Conference on Neural Networks.

[9] Yann LeCun, et al. Optimal Brain Damage, 1989, NIPS.

[10] Bernd Fritzke, et al. Unsupervised clustering with growing cell structures, 1991, IJCNN-91-Seattle International Joint Conference on Neural Networks.

[11] Scott E. Fahlman, et al. The Recurrent Cascade-Correlation Architecture, 1990, NIPS.

[12] J. Nadal, et al. Learning in feedforward layered networks: the tiling algorithm, 1989.

[13] Bernd Fritzke, et al. Let It Grow - Self-Organizing Feature Maps With Problem Dependent Cell Structure, 1991.

[14] Mario Marchand, et al. Learning by Minimizing Resources in Neural Networks, 1989, Complex Syst.

[15] Ethem Alpaydın. Grow-and-Learn: An Incremental Method for Category Learning, 1990.

[16] Joachim Diederich, et al. Connectionist Recruitment Learning, 1988, ECAI.

[17] Manoel Fernando Tenorio, et al. Self Organizing Neural Networks for the Identification Problem, 1988, NIPS.

[18] Christian Lebiere, et al. The Cascade-Correlation Learning Architecture, 1989, NIPS.

[19] M. Golea, et al. A Growth Algorithm for Neural Network Decision Trees, 1990.

[20] Marcus Frean, et al. The Upstart Algorithm: A Method for Constructing and Training Feedforward Neural Networks, 1990, Neural Computation.

[21] Hans Henrik Thodberg, et al. Improving Generalization of Neural Networks Through Pruning, 1991, Int. J. Neural Syst.

[22] Eric B. Baum, et al. Constructing Hidden Units Using Examples and Queries, 1990, NIPS.

[23] Vasant Honavar, et al. Generative learning structures and processes for generalized connectionist networks, 1993, Inf. Sci.

[24] Vasant Honavar, et al. Brain-structured Connectionist Networks that Perceive and Learn, 1989.

[25] Visakan Kadirkamanathan, et al. A Function Estimation Approach to Sequential Learning with Neural Networks, 1993, Neural Computation.

[26] Huan Liu, et al. Self-generating neural networks, 1992, Proceedings of the 1992 IJCNN International Joint Conference on Neural Networks.

[27] Jean-Pierre Nadal, et al. Study of a Growth Algorithm for a Feedforward Network, 1989, Int. J. Neural Syst.