The Dynamics of Associative Learning in an Evolved Situated Agent

Artificial agents controlled by dynamic recurrent node networks with fixed weights are evolved to search for food and associate it with one of two different temperatures, depending on experience. The task requires either instrumental or classically conditioned responses to be learned. The paper extends previous work in this area by requiring that a situated agent be capable of re-learning during its lifetime. We analyse the best-evolved agent's behaviour and explain in some depth how it arises from the dynamics of the coupled agent-environment system.
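
To make the controller setting concrete, the sketch below shows one Euler-integrated update step of a fixed-weight continuous-time recurrent node network of the kind commonly used for such evolved agents; the specific equation form, function names, and parameters are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def recurrent_node_step(y, I, W, tau, bias, dt=0.01):
    """One Euler step of a fixed-weight recurrent node network (illustrative form).

    y    : current node states, shape (N,)
    I    : external sensory input to each node, shape (N,)
    W    : synaptic weight matrix, shape (N, N); held fixed over the agent's lifetime
    tau  : node time constants, shape (N,)
    bias : node biases, shape (N,)
    """
    sigma = 1.0 / (1.0 + np.exp(-(y + bias)))   # sigmoidal node outputs
    dydt = (-y + W @ sigma + I) / tau           # leaky integration of recurrent and sensory input
    return y + dt * dydt
```

Because the weights W never change, any learning the agent exhibits must be carried by the ongoing node-state dynamics of the coupled agent-environment system rather than by synaptic modification.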