It is well known that individual learning can speed up artificial evolution enormously. However, both supervised learning and reinforcement learning require specific learning goals, which are usually unavailable or difficult to find. We introduce a new principle, homeokinesis, which is completely unspecific and yet induces specific, seemingly goal-oriented behaviors of an agent in a complex external world. The principle is based on the assumption that the agent is equipped with an adaptive model of its behavior. A learning signal for both the model and the controller is derived from the misfit between the real behavior of the agent in the world and the behavior predicted by the model. If the structural complexity of the model is chosen adequately, this misfit is minimized when the agent exhibits smooth, controlled behavior. The principle is illustrated by two examples. We also discuss how functional modularization emerges naturally in a structured system through a mechanism of competition for the best internal representation.
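The shared learning signal described above can be sketched in a minimal toy setting. The world dynamics, the scalar controller and model forms, and the learning rate below are illustrative assumptions, not the paper's actual architecture; the point is only that one prediction error drives gradient descent on both the model parameters and the controller parameter.

```python
import numpy as np

# Minimal homeokinesis-style sketch (all names and dynamics are assumptions).
# A controller y = tanh(c * x) acts in a toy 1-D world; an adaptive model
# x_pred = a * x + b * y predicts the next sensor value. The SAME misfit
# e = x_next - x_pred trains both the model and the controller.

rng = np.random.default_rng(0)

def world(x, y):
    # hypothetical environment: next sensor value from state and motor command
    return 0.9 * x + 0.5 * np.tanh(y) + 0.01 * rng.standard_normal()

c = 0.1          # controller parameter
a, b = 0.0, 0.0  # model parameters
eta = 0.05       # learning rate for both adaptation processes

x = 1.0
errors = []
for t in range(2000):
    y = np.tanh(c * x)
    x_next = world(x, y)
    x_pred = a * x + b * y
    e = x_next - x_pred              # misfit: real vs. predicted behavior

    # gradient descent on e**2 for the model ...
    a += eta * e * x
    b += eta * e * y
    # ... and for the controller, using only the pathway through the model
    # (the controller cannot differentiate through the unknown world)
    c += eta * e * b * (1.0 - y**2) * x

    errors.append(e * e)
    x = x_next

print(np.mean(errors[:100]), np.mean(errors[-100:]))
```

Running the loop, the squared misfit of the late phase falls well below that of the early phase: the model comes to track the world, and the controller settles into behavior the model can predict, which is the smooth, seemingly goal-oriented regime the principle aims at.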