An Architecture for Behavior Coordination Learning

This paper describes a neural architecture for learning to coordinate different behaviors in a situated agent. Behavior-oriented approaches define the control of an agent directly in terms of its tasks. A key challenge is managing the agent's ongoing tasks so that action conflict is minimized and the desired level of compliance with overall goals is achieved. We present mechanisms for adapting the coordination strategy through short- and long-term adaptive inhibition and time-varying performance feedback. Finally, we present preliminary experimental results for a simulated robot that demonstrate the effectiveness of this method.
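To make the coordination scheme concrete, the following is a minimal sketch, not the architecture described in the paper, of how behavior selection via adaptive inhibition with a performance signal might be organized. The behavior names, update rules, and learning rates are all illustrative assumptions.

```python
# Hypothetical sketch: behaviors propose activations; a coordinator subtracts
# short- and long-term inhibition terms that are adapted from a scalar
# performance feedback signal. All specifics here are assumed, not from the paper.
import random

class Behavior:
    def __init__(self, name):
        self.name = name

    def activation(self, sensors):
        # Placeholder: a real behavior would map sensor readings to an
        # activation level and a proposed action.
        return sensors.get(self.name, 0.0)

class Coordinator:
    def __init__(self, behaviors, fast_rate=0.2, slow_rate=0.01):
        self.behaviors = behaviors
        # Inhibition accumulated against each behavior, starting at zero.
        self.short_term = {b.name: 0.0 for b in behaviors}
        self.long_term = {b.name: 0.0 for b in behaviors}
        self.fast_rate = fast_rate
        self.slow_rate = slow_rate

    def select(self, sensors):
        # Effective activation = raw activation minus accumulated inhibition.
        scores = {
            b.name: b.activation(sensors)
                    - self.short_term[b.name]
                    - self.long_term[b.name]
            for b in self.behaviors
        }
        return max(scores, key=scores.get)

    def feedback(self, winner, performance):
        # Negative performance increases inhibition of the behavior that
        # just acted; positive performance relaxes it.
        self.short_term[winner] -= self.fast_rate * performance
        self.long_term[winner] -= self.slow_rate * performance
        # Short-term inhibition decays quickly between decisions.
        for name in self.short_term:
            self.short_term[name] *= 0.9

if __name__ == "__main__":
    coord = Coordinator([Behavior("avoid"), Behavior("approach")])
    for step in range(5):
        sensors = {"avoid": random.random(), "approach": random.random()}
        chosen = coord.select(sensors)
        coord.feedback(chosen, performance=random.uniform(-1.0, 1.0))
        print(step, chosen)
```

The two timescales mirror the abstract's short- and long-term adaptive inhibition: the fast term suppresses a behavior briefly after poor outcomes, while the slow term gradually biases the coordination strategy over many decisions.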