This paper describes aspects of our ongoing work in evolving recurrent dynamical artificial neural networks which act as sensory-motor controllers, generating adaptive behaviour in artificial agents. We start with a discussion of the rationale for our approach, which involves the use of recurrent networks of artificial neurons with rich dynamics, resilience to noise (both internal and external), and separate excitation and inhibition channels. The networks allow artificial agents, simulated or robotic, to exhibit adaptive behaviour. The complexity of designing networks built from such units leads us to use our own extended form of genetic algorithm, which allows for incremental automatic evolution of controller networks. Finally, we review some of our recent results applying our methods to simple visually guided robots. The genetic algorithm generates useful network architectures from an initial set of randomly connected networks. During evolution, uniform noise was added to the activation of each neuron. After evolution, we studied two evolved networks to see how their performance varied when the noise range was altered. Significantly, we discovered that when the noise was eliminated, the performance of the networks degraded: the networks use noise to operate efficiently.

Introduction and Rationale

Increasingly, practitioners of artificial neural network research are realising that both the complexity of model neurons and the styles of network architecture need to be extended beyond those employed in the much-cited work of the early 1980s. Certainly, models such as Hopfield networks or back-propagating multi-layer perceptrons played an important historical role in making parallel distributed processing an acceptable paradigm of study, but if we are to succeed either in understanding biological nervous systems or in building artificial neural networks which exhibit intelligent behaviour, it is likely that we will have to move to more complex models. But what form should this complexity take?

The notion of complexity is often highly subjective and hence problematic. We should definitely avoid introducing unnecessary complications, but, more importantly, we should not be deceived by our own simplifications. In artificial neural network (ANN) modelling, simplifications are made for various reasons. Often there are issues of mathematical tractability: certain model neurons or network architectures are easier to analyse formally than others. In other cases, the ease with which the models can be simulated, or built in available hardware, is an important factor, and appropriate simplifications are made. In either case it is important to note that the simplification is made for our convenience: the ANN is easier to construct or to understand. The problem with this approach is that, in using simplified models, we may actually be making life harder for ourselves as scientists, because the tasks we try to make our models perform may, by their very nature, require greater complexity than is possible without resorting to clever "trick" techniques or to large and unwieldy modular assemblies of simple networks.

Two simplifications are very common in ANN models: most models in the literature have very simple or non-existent dynamics, and arbitrary connectivity is often avoided. It is manifestly clear that networks with many feedback connections and delays between units are much more challenging to analyse, simulate, or design than are networks such as the common three-layer back-propagation network.
Yet for many interesting and important problems, feedback and intrinsic dynamics are almost certainly what is required. Furthermore, there is ample evidence in the neuroscience literature, from most branches of the animal kingdom, that biological neural networks exhibit rich dynamical behaviour and exploit feedback connections to great effect.

Additionally, many ANNs are developed purely to transform between representations or encodings which have been formulated by their designers. Such networks may be worthwhile engineering artefacts performing useful computations, but it is important to remember that the primary evolutionary pressure on the development of biological nervous systems, which we seek to understand or draw inspiration from, was whether a particular nervous system helped an animal survive in environments which were dynamic, uncertain, and often hostile. That is to say, nervous systems evolved where they generated adaptive behaviours, i.e. behaviours which are likely to increase the chances that the individual animal survives to reproduce. We, in common with a growing number of other researchers, believe that the generation of adaptive behaviours should form the primary focus for research into cognitive systems, and that issues of purely transforming between representations or encodings are at best secondary.

It is the above factors that have influenced our recent work, discussed in the remainder of this paper. We have created ANNs which generate adaptive behaviours in artificial animals, i.e. robotic or simulated agents. Our agents have tactile sensors and minimal visual systems (two oriented photoreceptors). The ANNs are highly recurrent networks of artificial neurons (called units), with propagation delays as signals pass across the links between units. The units have separate excitation and inhibition channels, and they operate in the presence of noise introduced both internally (within each unit) and externally (in sensory-motor transduction). The transfer functions for excitation and inhibition in each unit are nonlinear, with discontinuities in the first derivative.

Naturally, either analysing or designing networks composed of such units is a challenging and difficult task. Nevertheless, we believe that units of the sort used in our work are closer to the minimum complexity acceptable for generating adaptive behaviours than are the simpler units of prior work. For this reason, the problems of design and analysis have to be tackled rather than avoided by introducing simplifications. Our approach has been, as far as possible, to automate the design of the networks by employing our own extended form of genetic algorithm, known as SAGA. Whereas most genetic algorithms essentially perform optimisation in a fixed parameter space, SAGA allows the dimensionality of the parameter space to be placed under evolutionary control, by employing variable-length genotypes. In terms of the networks, this means we are able to start with a population of agents each of which has a minimal number of units; extra units may be introduced by mutation, and will be retained only if they increase the evolutionary success of the mutated agent. Our automatic network generation is thus truly incremental.
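As a rough illustration of this idea, the Python sketch below shows how a variable-length genotype can grow through an "add unit" mutation rather than being confined to a fixed-size parameter space. The gene fields, parameter values, and function names here are hypothetical placeholders; our actual genetic encoding is described later in the paper.

import random

# Illustrative sketch only: hypothetical gene fields, not our actual encoding.
# Each gene describes one unit and its outgoing links, so a genotype is simply
# a list of genes and can change length under mutation.

def random_gene(num_units):
    """Create a gene for one unit with a few randomly chosen link targets."""
    return {
        "veto_threshold": random.uniform(0.0, 1.0),
        "excitatory_links": random.sample(range(num_units), k=min(2, num_units)),
        "inhibitory_links": random.sample(range(num_units), k=min(1, num_units)),
    }

def mutate(genotype, p_add_unit=0.05):
    """Copy a parent genotype, occasionally appending a gene for a new unit.

    The new unit persists in the population only if the mutated agent is at
    least as evolutionarily successful as its competitors, so networks grow
    incrementally from a minimal starting point.
    """
    child = [dict(gene) for gene in genotype]
    if random.random() < p_add_unit:
        child.append(random_gene(num_units=len(child) + 1))
    # ... per-gene parameter mutations (thresholds, link targets) would go here ...
    return child

# Start every agent with a minimal three-unit genotype and produce offspring.
population = [[random_gene(num_units=3) for _ in range(3)] for _ in range(30)]
offspring = [mutate(parent) for parent in population]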
The rest of this paper discusses the neuron model and network simulations in more detail. Following this, details of how the networks are encoded as genes suitable for use with SAGA are given. Next, we discuss the adaptive behaviour evolved in our simulated agents and present a brief analysis of how the performance of the final evolved networks alters as the internal noise level is varied. Further details of our rationale, and full details of the visual sensing employed, are given in the papers listed in the references.

The Model Networks

Because our networks are recurrent, there is no clear divide between different layers (cf. the input, hidden, and output layers found in back-propagation networks). Nevertheless, for the purposes of generating adaptive behaviour it is necessary to designate some units as receiving input from sensors, and others as producing outputs to actuators such as motors. As is discussed elsewhere, this designation may be distorted by the evolutionary processes. The remainder of this section discusses details of the neuron model and of how the network architectures are encoded as genes which can be operated on by the SAGA genetic algorithm.

The Neuron Model

The neuron model we have employed in our work to date has separate channels for excitation and inhibition. Values propagate along the links between units and are all real numbers in the range $[0, 1]$. All links are subject to a propagation delay. A schematic block diagram of the operations within a single model neuron is given in the accompanying figure. Unusually, the inhibition channels operate as a veto, or grounding, mechanism: if a unit receives any inhibitory input, its excitatory output is reduced to zero, but it can still inhibit other units. Excitatory input from sensors or other units is summed; if this sum exceeds a specified veto threshold $t_v$, the unit produces an inhibitory output. Independently, the sum of excitatory inputs has uniformly distributed noise, drawn from the interval $[-n, +n] \subset \mathbb{R}$, added internally, and is then passed through an excitation transfer function, the result of which forms the excitatory output for that unit, so long as the unit has not been inhibited. More formally, the excitation transfer function $T$ takes the form

\[
T(x) =
\begin{cases}
1 & \text{if } x \geq t_u \\
0 & \text{if } x \leq t_l \\
\dfrac{x}{t_u - t_l} - \dfrac{t_l}{t_u - t_l} & \text{otherwise}
\end{cases}
\]

where $t_l$ and $t_u$ are lower and upper threshold levels. The veto output function $U$ takes an analogous thresholded form: the unit produces an inhibitory output whenever its summed excitatory input exceeds the veto threshold $t_v$, and no inhibitory output otherwise.
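As a concrete, purely illustrative rendering of the unit just described, the Python sketch below computes one unit's excitatory and inhibitory outputs from its summed inputs. The threshold and noise values are arbitrary placeholders, and the names are ours rather than those of the original implementation; the magnitude of the inhibitory output is likewise an assumption.

import random

def unit_outputs(excitatory_sum, inhibitory_sum,
                 t_l=0.2, t_u=0.8, t_v=0.75, noise=0.1):
    """One update of a single model neuron (illustrative sketch only).

    excitatory_sum -- summed excitatory input from sensors and other units
    inhibitory_sum -- summed inhibitory (veto) input arriving at this unit
    t_l, t_u       -- lower/upper thresholds of the transfer function T
    t_v            -- veto threshold
    noise          -- half-width n of the internal uniform noise [-n, +n]
    All numeric values here are arbitrary placeholders.
    """
    # Veto channel: an inhibitory output (assumed to be 1.0) is produced
    # whenever the noise-free excitatory sum exceeds the veto threshold.
    inhibitory_out = 1.0 if excitatory_sum > t_v else 0.0

    # Internal noise is added before the excitation transfer function.
    x = excitatory_sum + random.uniform(-noise, +noise)

    # Piecewise-linear transfer function T, saturating at 0 and 1.
    if x >= t_u:
        excitatory_out = 1.0
    elif x <= t_l:
        excitatory_out = 0.0
    else:
        excitatory_out = (x - t_l) / (t_u - t_l)

    # Any inhibitory input grounds the excitatory output to zero,
    # but the unit's own inhibitory output is unaffected.
    if inhibitory_sum > 0.0:
        excitatory_out = 0.0

    return excitatory_out, inhibitory_out

print(unit_outputs(excitatory_sum=0.6, inhibitory_sum=0.0))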
[1] S. Laughlin et al. Computational neuroethology: a provisional manifesto. 1991.
[2] Inman Harvey et al. Species Adaptation Genetic Algorithms: A Basis for a Continuing SAGA. CSRP, 1992.
[3] Randall D. Beer et al. Evolving Dynamical Neural Networks for Adaptive Behavior. Adaptive Behavior, 1992.
[4] P. Husbands et al. Analysis of Evolved Sensory-motor Controllers. 1992.
[5] Inman Harvey et al. Issues in evolutionary robotics. 1993.
[6] Inman Harvey et al. Analysing recurrent dynamical networks evolved for robot control. 1993.
[7] Inman Harvey et al. Evolving visually guided robots. 1993.
[8] Inman Harvey et al. Evolutionary robotics and SAGA: The case for hill crawling and tournament selection. 1994.