Design of architectures and training of neural networks with a distributed genetic algorithm

A method for designing and training neural networks with a distributed genetic algorithm reinforced by the perceptron learning rule is presented. The method determines the network's architecture and weights for a given task, where the network is composed of binary linear threshold units. The search space is not the space of all possible networks; instead, the search is distributed over the individual units. Each unit is examined, under the constraint of a feedforward network structure, to find its optimal set of connections and associated weights with respect to the current state of the network. An objective (fitness) function is defined for the genetic algorithm: for each unit it considers primarily the overall network error and, secondarily, those candidate connections and weights that favor continued convergence. The perceptron learning rule is used to search for a better set of input connection weights for each unit. Examples are given that demonstrate the potential of the proposed approach.
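
As a rough illustration of the local refinement step, the following minimal Python sketch applies the classical perceptron learning rule to a single binary linear threshold unit; the function names, learning rate, the toy AND task, and the magnitude-based secondary term in the fitness are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def threshold_output(x, w, theta):
        # Binary linear threshold unit: outputs 1 when the weighted input sum
        # reaches the threshold theta, otherwise 0.
        return 1 if np.dot(w, x) >= theta else 0

    def perceptron_step(w, theta, x, target, lr=1.0):
        # Classical perceptron learning rule for one unit's input connection weights.
        y = threshold_output(x, w, theta)
        err = target - y                      # +1, 0, or -1
        return w + lr * err * x, theta - lr * err

    def unit_fitness(network_error, w, alpha=1.0, beta=0.01):
        # Hypothetical per-unit fitness: the overall network error dominates
        # (alpha term), with a small secondary preference over the unit's
        # weights (beta term); the exact secondary term in the paper may differ.
        return -(alpha * network_error + beta * np.abs(w).sum())

    # Toy usage: train one unit on logical AND with the perceptron rule.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([0, 0, 0, 1])
    w, theta = np.zeros(2), 0.0
    for _ in range(10):
        for x, t in zip(X, T):
            w, theta = perceptron_step(w, theta, x, t)
    print(w, theta, [threshold_output(x, w, theta) for x in X])

In the distributed scheme described above, such a local weight search would serve only as the refinement step for one unit, while the genetic algorithm explores the unit's possible connection sets and scores them with a fitness of the kind sketched here.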