Growing adaptive neural networks with graph grammars

This paper describes how graph grammars may be used to grow neural networks. The grammar enables a compact, declarative description of every aspect of a neural architecture; this is important from a software/neural engineering point of view, since such descriptions are much easier to write and maintain than programs written in a high-level language such as C++, and do not require programming ability. The output of the growth process is a neural network that can be transformed into a PostScript representation for display, simulated using a separate neural network simulation program, or, in some cases, mapped directly into hardware. In this approach there is no separate learning algorithm; learning proceeds (if at all) as an intrinsic part of the network behaviour. This has interesting applications in the evolution of neural networks, since it becomes possible to evolve all aspects of a network (including the learning 'algorithm') within a single unified paradigm. As an example, a grammar is given for growing a multi-layer perceptron with active weights that has the error back-propagation learning algorithm embedded in its structure.
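
To make the growth process concrete, the sketch below shows how a node-rewriting graph grammar might expand layer labels into a small feed-forward topology. This is a minimal illustration under simple assumptions: the production format, the labels, and the `grow` driver are hypothetical and do not reproduce the paper's actual grammar formalism.

```python
# Minimal sketch: a context-free node-rewriting grammar in which each
# production replaces a labelled node with a small subgraph. Hypothetical
# names throughout; not the paper's notation.

import itertools

_ids = itertools.count()

def fresh(label):
    """Create a new node as a (unique id, label) pair."""
    return (next(_ids), label)

def expand_layer(n_units):
    """Production: a layer label rewrites into n_units neuron nodes.

    Returns (new_nodes, new_edges, input_nodes, output_nodes) so the
    driver knows how to splice the subgraph into the growing network.
    """
    def rule():
        units = [fresh("neuron") for _ in range(n_units)]
        return units, [], units, units
    return rule

# Grammar: one production per nonterminal layer label.
productions = {
    "hidden": expand_layer(3),   # a hidden layer becomes 3 neurons
    "output": expand_layer(2),   # the output layer becomes 2 neurons
}

def grow(axiom_labels):
    """Expand a chain of layer labels into a fully connected net."""
    nodes, edges = [], []
    prev_outputs = []
    for label in axiom_labels:
        sub_nodes, sub_edges, ins, outs = productions[label]()
        nodes += sub_nodes
        edges += sub_edges
        # Splice: connect every output of the previous subgraph
        # to every input of the new one.
        edges += [(u, v) for u in prev_outputs for v in ins]
        prev_outputs = outs
    return nodes, edges

nodes, edges = grow(["hidden", "output"])
print(len(nodes), "nodes,", len(edges), "edges")  # 5 nodes, 6 edges
```

In the same spirit, productions could rewrite connections into active-weight subnetworks, which is how a learning rule such as back-propagation can be embedded in the grown structure rather than applied by a separate algorithm.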
