Perpetuating evolutionary emergence is the key to artificially evolving increasingly complex systems. In order to generate complex entities with adaptive behaviors beyond our manual design capability, long-term incremental evolution with continuing emergence is called for. Purely artificial selection models, such as traditional genetic algorithms, are argued to be fundamentally inadequate for this task, and existing natural selection systems are evaluated. From this evaluation, some requirements for perpetuating evolutionary emergence are derived. A new environment containing simple virtual autonomous organisms has been created to satisfy these requirements. Resulting evolutionary emergent behaviors are reported alongside their neural correlates. In one example, the collective behavior of one species clearly provides a selective force which is overcome by another species, demonstrating the perpetuation of evolutionary emergence via naturally arising coevolution.

1. Evolutionary emergence

Emergence relates to qualitatively novel structures and behaviors which are not reducible to those hierarchically below them. It offers an attractive methodology for tackling Descartes’ Dictum: “how can a designer build a device which outperforms the designer’s specifications?” (Cariani, 1991, page 776). Most importantly, it is necessary for the generation of complex entities with behaviors beyond our manual design capability.

Cariani identified the three current tracks of thought on emergence, calling them “computational”, “thermodynamic” and “relative to a model” (Cariani, 1991). Computational emergence concerns the manifestation of new global forms, such as flocking behavior and chaos, from local interactions. Thermodynamic emergence is concerned with issues such as the origins of life, where order emerges from noise. ‘Emergence relative to a model’ deals with situations where observers need to change their model in order to keep up with a system’s behavior.
This is close to Steels’ concept of emergence, which refers to ongoing processes that produce results invoking vocabulary not previously involved in the description of the system’s inner components – “new descriptive categories” (Steels, 1994, section 4.1). Evolutionary emergence falls into the ‘emergence relative to a model’ category. Consider a virtual world of organisms that can move, reproduce and kill according to rules sensitive to the presence of other organisms, evolving under natural selection. Should flocking manifest itself in this system, we could classify it as emergent in two senses: first in the ‘computational’ sense, from the interaction of local rules, flocking being a collective behavior; and second in the ‘relative to a model’ sense, from the evolution, the behavior being novel to the system. While the first is also relevant to our goal, in that complex adaptive systems will involve such emergence, the second is the key to understanding evolutionary emergence.

Harvey’s Species Adaptation Genetic Algorithm (SAGA) theory (Harvey, 1992) provides a framework for incremental evolution, which is necessary for evolutionary emergence. In this paradigm a population, possibly of just a few tens of members, evolves for many thousands of generations, with gradual changes in genotype information content. Increases in complexity must therefore result from evolution itself. This contrasts with the common use of the Genetic Programming (GP) paradigm, where a population of millions may be evolved for fewer than a hundred generations (Harvey, 1997, section 5). In the GP case, recombination effectively mixes the random initial population, exhausting variation within a few generations. Because (genetic codings of) computer programs result in rugged fitness landscapes, there can be little further evolution of this converged population. Here we see one of the requirements of SAGA: a smooth fitness landscape.
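The ‘computational’ half of the flocking classification above can be made concrete. The sketch below is a toy, not any particular published model: each agent follows one local rule (turn toward the mean heading of nearby agents), and a global order parameter – polarisation, which is 1.0 when all agents point the same way – rises out of those purely local interactions. All parameter values (interaction radius, turn rate, population size) are illustrative assumptions.

```python
import math
import random

def step(positions, headings, radius=5.0, rate=0.2):
    """One update of a purely local alignment rule: each agent turns
    part-way toward the mean heading of the agents within `radius`."""
    new_headings = []
    for i, (xi, yi) in enumerate(positions):
        cx = cy = 0.0
        for j, (xj, yj) in enumerate(positions):
            if math.hypot(xi - xj, yi - yj) <= radius:  # neighbour (incl. self)
                cx += math.cos(headings[j])
                cy += math.sin(headings[j])
        target = math.atan2(cy, cx)
        # shortest-arc turn toward the local average heading
        delta = math.atan2(math.sin(target - headings[i]),
                           math.cos(target - headings[i]))
        new_headings.append(headings[i] + rate * delta)
    # every agent moves one unit forward along its (new) heading
    new_positions = [(x + math.cos(h), y + math.sin(h))
                     for (x, y), h in zip(positions, new_headings)]
    return new_positions, new_headings

def polarisation(headings):
    """Global order parameter: 1.0 when every agent points the same way."""
    n = len(headings)
    return math.hypot(sum(math.cos(h) for h in headings),
                      sum(math.sin(h) for h in headings)) / n

random.seed(0)
pos = [(random.uniform(0.0, 10.0), random.uniform(0.0, 10.0)) for _ in range(20)]
hdg = [random.uniform(-math.pi, math.pi) for _ in range(20)]
before = polarisation(hdg)   # random headings: low polarisation
for _ in range(50):
    pos, hdg = step(pos, hdg)
after = polarisation(hdg)    # aligned flock: polarisation has risen
```

No agent computes anything global, yet the population-level quantity `after` exceeds `before` – emergence in the ‘computational’ sense only; nothing here is novel relative to an observer’s model, since the alignment rule was designed in.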
Having specified what is meant by evolutionary emergence, we now explore the two types of selection which might be used to bring it about. Packard referred to these as “extrinsic adaptation, where evolution is governed by a specified fitness function, and intrinsic adaptation, where evolution occurs ‘automatically’ as a result of the dynamics of a system caused by the evolution of many interacting subsystems” (Packard, 1989, abstract). We will refer to them as artificial and natural selection respectively, because the first involves the imposition of an artifice crafted for some purpose external to the system beneath it, while the second relies solely on the system’s innate dynamics.

2. Artificial selection

Within the artificial evolution field, variants of the optimization paradigm have proven fruitful. Even where the concepts of SAGA theory are dominant, practice still holds to the use of fitness functions. But as the complexity of the behaviors attempted increases, flaws in the artificial selection approach are appearing. Zaera, Cliff and Bruten’s failed attempts at evolving schooling behavior in artificial ‘fish’ (Zaera et al., 1996) provide an account of the difficulties faced. An extract from the abstract of their paper still gives an excellent summary of the state of artificial selection work within the field:

“The problem appears to be due to the difficulty of formulating an evaluation function which captures what schooling is. We argue that formulating an effective fitness evaluation function for use in evolving controllers can be at least as difficult as hand-crafting an effective controller design. Although our paper concentrates on schooling, we believe that this is likely to be a general issue, and is a serious problem which can be expected to be experienced over a variety of problem domains.”

Zaera et al. considered possible reasons for their failure.
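The structural difference between the two kinds of selection can be sketched in code. The following toy (every detail – the bit-string genomes, the `sum(genome)` fitness artifice, the energy economy, the food pattern, the thresholds – is an illustrative assumption, not any system from the literature) contrasts an extrinsic loop, where an experimenter-written fitness function ranks the population from outside, with an intrinsic loop, where reproduction and death fall out of interactions between organisms and their environment and no fitness score exists anywhere:

```python
import random

GENOME_LEN = 8

def mutate(genome, rate=0.05):
    """Flip each bit with a small probability."""
    return [b ^ (1 if random.random() < rate else 0) for b in genome]

# --- Extrinsic (artificial) selection: an imposed fitness function ---
def fitness(genome):
    return sum(genome)                 # the experimenter's artifice: "more 1s is better"

def artificial_generation(pop):
    ranked = sorted(pop, key=fitness, reverse=True)
    parents = ranked[: len(pop) // 2]  # truncation selection on the score
    return [mutate(random.choice(parents)) for _ in pop]

# --- Intrinsic (natural) selection: no fitness function anywhere ---
def natural_step(pop, energies, food):
    """Organisms whose genomes match the environment's 'food' pattern feed
    well; reproduction and death are side effects of the energy economy."""
    next_pop, next_energies = [], []
    for genome, energy in zip(pop, energies):
        intake = sum(1 for g, f in zip(genome, food) if g == f)
        energy += intake - 5           # feeding minus a fixed metabolic cost
        if energy <= 0:
            continue                   # starvation, not ranking, removes it
        if energy > 10:                # well-fed organisms reproduce
            child_energy = energy // 2
            energy -= child_energy
            next_pop.append(mutate(genome))
            next_energies.append(child_energy)
        next_pop.append(genome)
        next_energies.append(energy)
    return next_pop, next_energies

random.seed(1)
food = [1, 0, 1, 1, 0, 0, 1, 0]

def mean_match(pop):
    return sum(sum(1 for g, f in zip(genome, food) if g == f)
               for genome in pop) / len(pop)

# extrinsic run: mean imposed fitness climbs
apop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(50)]
start_fit = sum(map(fitness, apop)) / len(apop)
for _ in range(30):
    apop = artificial_generation(apop)
end_fit = sum(map(fitness, apop)) / len(apop)

# intrinsic run: adaptation to the food pattern emerges without any score
pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(50)]
energies = [10] * len(pop)
start_match = mean_match(pop)
for _ in range(30):
    pop, energies = natural_step(pop, energies, food)
    if len(pop) > 200:                 # crude carrying capacity: random culling
        keep = random.sample(range(len(pop)), 200)
        pop = [pop[i] for i in keep]
        energies = [energies[i] for i in keep]
end_match = mean_match(pop)
```

The extrinsic loop can only ever select for what `fitness` specifies; in the intrinsic loop the selective pressure (match the food pattern) was never written down as a function, which is the sense in which anything that emerges there is due to the innate dynamics of the system.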
The argument which most convinced them was that real schooling arises through complex interactions, and that their simulations lacked sufficient complexity (Zaera et al., 1996, section 5). They cited two promising works: Reynolds’ evolution of coordinated group motion in ‘prey’ animats pursued by a hard-wired ‘predator’ (Reynolds, 1992), and Rucker’s ‘ecosystem’ model (Rucker, 1993) in which Boid-like animat controllers (or rather their parameters) were evolved. Both of these are moves towards more intrinsic, automatic evolution.

The use of coevolutionary models is fast becoming a dominant approach in the adaptive behavior field, essentially in response to the problems encountered when trying to use artificial selection to evolve complex behaviors. However, artificial selection has kept its hold so far – most systems still use fitness functions. The reasoning given for imposing coevolution is often that it helps to overcome the problems arising from static fitness landscapes.

From the discussion so far, one might assume our argument to be that evolutionary emergence is not possible in a system using artificial selection. This is not quite so, although we do argue that artificial selection is neither sufficient nor necessary. In the context of evolutionary emergence, any artificial selection used constitutes just one part of a system. Artificial selection can only select for what it is specified to select for. Therefore anything that emerges during evolution must be due to another aspect of selection, which must in turn be due to the innate dynamics of the system – natural selection.

3. Natural selection

As noted in section 1, genetic codings of computer programs result in rugged fitness landscapes, making them unsuitable for incremental evolution. However, most natural selection work has been program code evolution, following the initial success of ‘Tierra’ (Ray, 1991).
3.1 Natural selection of program code

Tierra is a system of self-replicating machine code programs, initialized as a single manually designed self-replicating program. To make evolution possible, random bit-flipping was imposed on the memory. A degree of artificial selection was imposed by the system deleting the oldest programs in order to free memory, with an added bias against programs that generated error conditions.

Tierra was implemented as a virtual computer, allowing Ray to design a machine language with some properties suiting it to evolution. One aspect of this language was that it contained no numeric constants (such as 13), so direct memory addressing was not possible. Instead, the manually designed program used consecutive NOP (No-OPeration) instructions which acted as templates that could be found by certain machine code instructions. This ‘addressing by templates’ is how the program determined the points at which to begin and end copying. Another aspect of the system was that computational errors were introduced at random. Such errors could lead to genetic changes by affecting replication.

When Tierra was run, various classes of programs evolved. ‘Parasites’ had shed almost half of their code; they replicated by executing the copy loop of neighboring organisms, which could easily be found by template-matching instructions as before. Because the parasites depended on their ‘hosts’, they could not displace them, and the host and parasite populations entered into Lotka-Volterra population cycles. Ray reported that coevolution occurred as the hosts became immune to the parasites, which then overcame these defenses, and so on. ‘Hyper-parasite’ hosts emerged containing instructions that caused a parasite to copy the host rather than the parasite; this could lead to the rapid elimination of parasites. Ray also reported cooperation (symbiosis) in replication, followed by ‘cheaters’ (social parasites) which took advantage of the cooperators.
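The ‘addressing by templates’ mechanism can be illustrated with a toy interpreter. The instruction names below and the single forward scan are simplifications for illustration (Ray’s actual design searches outward in both directions and has its own opcode set); only the central idea is faithful: a run of NOP0/NOP1 instructions is located by searching for its complement, so no numeric address ever appears in the code.

```python
# Toy sketch of Tierra-style 'addressing by templates': the code after a
# jump-like instruction is a template of NOPs, and control transfers to
# wherever the *complementary* template occurs in memory.

COMPLEMENT = {"NOP0": "NOP1", "NOP1": "NOP0"}

def find_template(code, start):
    """Read the template following position `start`, then scan for the
    complementary template; return the position just past it, or None."""
    # read the template that immediately follows the calling instruction
    tmpl = []
    i = start + 1
    while i < len(code) and code[i] in COMPLEMENT:
        tmpl.append(code[i])
        i += 1
    if not tmpl:
        return None                      # no template: the search fails
    target = [COMPLEMENT[t] for t in tmpl]
    n = len(target)
    # scan the memory for the complementary run (simplified forward scan)
    for j in range(len(code) - n + 1):
        if code[j:j + n] == target:
            return j + n                 # execution continues after the match
    return None

# a fragment of 'memory': JMP's template NOP0,NOP0 matches the
# complementary run NOP1,NOP1, so control lands on COPY
soup = ["JMP", "NOP0", "NOP0", "INC_A", "SUB_AB",
        "NOP1", "NOP1", "COPY", "RET"]
dest = find_template(soup, 0)            # -> 7, i.e. the COPY instruction
```

Because matching is by content rather than by numeric offset, a mutated program’s jumps can still resolve after code is copied, shortened, or relocated – and, as with the parasites above, a template search can just as easily land inside a neighboring organism’s code.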
The above are examples of ecological adaptations. Another class of adaptations found was ‘optimizations’. For example, non-parasitic replicators almost a quarter the length of the initial replicator were found, as were programs with ‘unrolled’ copy loops, which copied more than one instruction per loop iteration.
[1] L. Yaeger. Computational Genetics, Physiology, Metabolism, Neural Systems, Learning, Vision, and Behavior or PolyWorld: Life in a New Context, 1997.
[2] T. Ray. Evolution, Ecology and Optimization of Digital Organisms, 1992.
[3] N. Zaera, D. Cliff and J. Bruten. (Not) Evolving Collective Behaviours in Synthetic Fish, 1996.
[4] R. Rucker. Artificial Life Lab, 1993.
[5] C. W. Reynolds. An evolved, vision-based behavioral model of coordinated group motion, 1993.
[6] A. Channon et al. The Evolutionary Emergence route to Artificial Intelligence, 1996.
[7] T. S. Ray et al. An Approach to the Synthesis of Life, 1991.
[8] A. D. Channon and R. I. Damper. Evolving Novel Behaviors via Natural Selection, 1998.
[9] L. Steels et al. The Artificial Life Roots of Artificial Intelligence, Artificial Life, 1994.
[10] I. Harvey. Species Adaptation Genetic Algorithms: A Basis for a Continuing SAGA, 1992.
[11] I. Harvey et al. Incremental evolution of neural network architectures for adaptive behavior, ESANN, 1993.
[12] I. Harvey. Cognition is Not Computation; Evolution is Not Optimisation, ICANN, 1997.