A consistency-based approach to action model learning in a community of agents

In this paper, we model a community of autonomous agents in which each agent acts in its environment according to a relational action model [4] describing the effect to be expected when a given action is applied in a given state. At any given moment, the underlying action model is only imperfectly known by the agents and may have to be revised when the current action produces an unexpected effect. In a multi-agent context, this revision process can and should benefit from interactions between the agents. For that purpose, we combine the general multi-agent learning protocol SMILE [2] with the relational action model learner IRALE [5] in order to model these interactions.
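The revise-on-inconsistency loop described above can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's SMILE/IRALE implementation: the action model is reduced to a lookup table from (action, state) pairs to effects, revision is a naive table update, and peer interaction is simplified to forwarding the contradicting observation.

```python
# Hypothetical illustration only (not the actual SMILE/IRALE algorithms):
# each agent predicts the effect of an action from its current model and
# revises that model whenever an observed effect contradicts the prediction,
# then shares the counterexample with its peers.

class Agent:
    def __init__(self, name):
        self.name = name
        # Simplified action model: maps (action, state) -> expected effect.
        self.model = {}

    def predict(self, action, state):
        """Return the expected effect, or None if the model is silent."""
        return self.model.get((action, state))

    def observe(self, action, state, effect, peers=()):
        """Revise the model when an observation contradicts the prediction;
        return True iff a revision occurred."""
        if self.predict(action, state) == effect:
            return False  # model already consistent with the observation
        self.model[(action, state)] = effect  # naive revision step
        for peer in peers:
            # Propagate the counterexample so peers can revise too.
            peer.observe(action, state, effect)
        return True


a, b = Agent("a"), Agent("b")
a.observe("push", "door_closed", "door_open", peers=[b])
# Both agents now predict "door_open" for ("push", "door_closed").
```

In the actual setting, the model is relational (rules over predicates rather than a table) and the interaction protocol guarantees consistency properties that this sketch does not capture.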