Optimal Genetic Representation of Complete Strictly-Layered Feedforward Neural Networks

The automatic evolution of neural networks is both an attractive and a rewarding task. The connectivity matrix is the most common way of directly encoding a neural network for the purpose of genetic optimization. However, this representation presents several disadvantages, stemming mostly from its inherent redundancy and its lack of robustness. We propose a novel representation scheme for encoding complete strictly-layered feedforward neural networks and prove that it is optimal in the sense that it uses the minimum possible number of bits. We argue that this scheme has a number of important advantages over the direct encoding of the connectivity matrix: it does not suffer from the curse of dimensionality, it allows only legal networks to be represented, relieving the genetic algorithm of numerous checks and rejections, and the mapping from genotypes to phenotypes is one-to-one. Additionally, the resulting networks have a simpler structure, ensuring an easier learning phase.
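The abstract does not spell out the encoding itself, so the following is only a minimal sketch of the general idea under one plausible assumption: for a complete strictly-layered feedforward network, the topology is fully determined by the number of units in each layer, so a genotype need only record the layer sizes rather than a full connectivity matrix. The function names (encode_layer_sizes, decode_layer_sizes) and the fixed-width bit layout are illustrative, not the paper's actual representation.

```python
# Illustrative sketch, NOT the paper's encoding.
# Assumption: a complete strictly-layered topology is fully determined by its
# layer sizes, so recording the sizes is far more compact than a connectivity
# matrix and cannot describe an illegal (non-layered, incomplete) network.

from math import ceil, log2
from typing import List


def encode_layer_sizes(layer_sizes: List[int], max_units: int) -> str:
    """Encode each layer size with a fixed minimal number of bits."""
    bits_per_layer = max(1, ceil(log2(max_units + 1)))
    return "".join(format(n, f"0{bits_per_layer}b") for n in layer_sizes)


def decode_layer_sizes(genotype: str, num_layers: int, max_units: int) -> List[int]:
    """Recover the layer sizes (and hence the full topology) from the bit string."""
    bits_per_layer = max(1, ceil(log2(max_units + 1)))
    return [
        int(genotype[i * bits_per_layer : (i + 1) * bits_per_layer], 2)
        for i in range(num_layers)
    ]


def connectivity_matrix_bits(layer_sizes: List[int]) -> int:
    """Bits needed by a direct connectivity-matrix encoding of the same network."""
    total_units = sum(layer_sizes)
    return total_units * total_units


if __name__ == "__main__":
    layers = [4, 7, 3]  # input, hidden, output units
    g = encode_layer_sizes(layers, max_units=15)
    assert decode_layer_sizes(g, num_layers=3, max_units=15) == layers
    print(f"layer-size genotype:  {len(g)} bits")                           # 12 bits
    print(f"connectivity matrix:  {connectivity_matrix_bits(layers)} bits")  # 196 bits
```

In this sketch every bit string of the right length decodes to a valid layered network, which mirrors the abstract's claims of a one-to-one genotype-phenotype mapping and of sparing the genetic algorithm from rejecting illegal individuals.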
