Towards Incentive-Compatible Reputation Management

Traditional centralized approaches to security are difficult to apply to large, distributed, multi-agent systems. A notion of trust based on the reputation of agents can provide a softer form of security that is sufficient for many MAS applications. Designing a reliable and trustworthy reputation mechanism, however, is not a trivial problem. In this paper, we address the issue of incentive-compatibility, i.e., why agents should report reputation information, and why they should report it truthfully. By introducing a side-payment scheme organized through a set of broker agents, we make it rational for software agents to truthfully share the reputation information they have acquired through their past experience. The theoretical results are verified by a simple simulation. We conclude with an analysis of the robustness of the system in the presence of an increasing percentage of lying agents.
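To make the idea of broker-organized side payments concrete, the following is a minimal sketch of one way such a scheme could work; it is not the paper's actual mechanism. It assumes an output-agreement payment rule (a report earns a reward when it matches the next independent report about the same target), and all class, method, and agent names are illustrative.

```python
# Illustrative sketch (assumed, not the paper's scheme): a broker that charges a
# small fee per reputation report and pays a reward when a report agrees with
# the next independent report about the same target, so truthful reporting
# pays off when peers also report truthfully.

class Broker:
    def __init__(self, reward=1.0, fee=0.5):
        self.reward = reward   # paid when a report agrees with the next peer report
        self.fee = fee         # charged for submitting a report
        self.balances = {}     # agent id -> accumulated side payments
        self.pending = {}      # target id -> (reporter id, rating) awaiting comparison

    def submit(self, reporter, target, rating):
        """Accept a boolean report about `target` and settle against a pending peer report."""
        self.balances[reporter] = self.balances.get(reporter, 0.0) - self.fee
        if target in self.pending:
            prev_reporter, prev_rating = self.pending.pop(target)
            if prev_rating == rating:  # agreement: reward both reporters
                self.balances[prev_reporter] += self.reward
                self.balances[reporter] += self.reward
        else:
            self.pending[target] = (reporter, rating)

broker = Broker()
broker.submit("alice", "seller42", True)
broker.submit("bob", "seller42", True)   # agrees with alice: each nets reward - fee
```

Under this toy rule, an agent expecting peers to report honestly maximizes its payoff by reporting honestly as well; a richer payment rule would be needed to handle colluding or systematically lying agents, which is the robustness question the paper examines.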