This paper compares the efficiency of two encoding schemes for Artificial Neural Networks optimized by evolutionary algorithms. Direct Encoding encodes the weights of an a priori fixed neural network architecture. Cellular Encoding encodes both the weights and the architecture of the neural network. In previous studies, Direct Encoding and Cellular Encoding have been used to create neural networks for balancing one or two poles attached to a cart on a fixed track. The poles are balanced by a controller that pushes the cart to the left or the right. In some cases velocity information about the pole and cart is provided as an input; in other cases the network must learn to balance a single pole without velocity information. A careful study of the behavior of these systems suggests that it is possible to balance a single pole without velocity information as an input and without learning to compute the velocity. A new fitness function is introduced that forces the neural network to compute the velocity. By using this new fitness function and tuning the syntactic constraints used with Cellular Encoding, we achieve a tenfold speedup over our previous study and solve a more difficult problem: balancing two poles when no information about the velocity is provided as input.
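To make the control task concrete, the sketch below simulates the standard single-pole cart-pole system and evaluates a position-only controller (standing in for the evolved network) with a fitness of the kind described above: time balanced minus a penalty on residual cart and pole motion, so that a controller that merely jiggles the cart back and forth scores poorly and high fitness requires an internal estimate of the velocities. The dynamics constants, the penalty form, and the `controller(x, theta)` interface are illustrative assumptions, not the exact function used in the paper.

```python
import math

# Physical constants for the standard cart-pole benchmark (commonly used
# values, not necessarily those of the original study).
GRAVITY   = 9.8     # m/s^2
MASS_CART = 1.0     # kg
MASS_POLE = 0.1     # kg
POLE_HALF = 0.5     # half pole length, m
FORCE_MAG = 10.0    # N, magnitude of the bang-bang push
TAU       = 0.02    # s, Euler integration step

def step(state, force):
    """One Euler step of the single-pole cart-pole dynamics."""
    x, x_dot, theta, theta_dot = state
    total_mass = MASS_CART + MASS_POLE
    pole_mass_length = MASS_POLE * POLE_HALF
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    temp = (force + pole_mass_length * theta_dot ** 2 * sin_t) / total_mass
    theta_acc = (GRAVITY * sin_t - cos_t * temp) / (
        POLE_HALF * (4.0 / 3.0 - MASS_POLE * cos_t ** 2 / total_mass))
    x_acc = temp - pole_mass_length * theta_acc * cos_t / total_mass
    return (x + TAU * x_dot,
            x_dot + TAU * x_acc,
            theta + TAU * theta_dot,
            theta_dot + TAU * theta_acc)

def fitness(controller, max_steps=1000, penalty_weight=0.5):
    """Hypothetical fitness: reward time balanced, penalize residual motion.

    `controller` maps (x, theta) -- positions only, no velocities -- to a
    real value; its sign selects a left or right push.  The penalty on the
    average |x_dot| + |theta_dot| over the run is meant to rule out
    controllers that simply oscillate the cart without ever estimating the
    velocities.
    """
    state = (0.0, 0.0, 0.05, 0.0)   # small initial pole tilt
    steps, motion = 0, 0.0
    for _ in range(max_steps):
        force = FORCE_MAG if controller(state[0], state[2]) > 0 else -FORCE_MAG
        state = step(state, force)
        x, x_dot, theta, theta_dot = state
        if abs(x) > 2.4 or abs(theta) > 12 * math.pi / 180:
            break                    # cart left the track or pole fell
        steps += 1
        motion += abs(x_dot) + abs(theta_dot)
    return steps - penalty_weight * motion / max(steps, 1)

if __name__ == "__main__":
    # A trivial proportional rule as a stand-in for an evolved network.
    print(fitness(lambda x, theta: theta))
```

Under a fitness like this, the degenerate strategy of keeping the pole up through constant rapid reversals is no longer rewarded, which is the pressure the abstract refers to when it says the new fitness function forces the network to compute the velocity.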