Using trust for detecting deceitful agents in artificial societies

Trust is one of the most important concepts guiding decision-making and contracting in human societies. In artificial societies, this concept has been neglected until recently. The benevolence assumption built into many multiagent systems can have hazardous consequences when deceit arises in open systems. The aim of this paper is to establish a mechanism that helps agents cope with environments inhabited by both selfish and cooperative entities. This is achieved by enabling agents to evaluate their trust in others. A formalization and an algorithm for trust are presented so that agents can autonomously deal with deception and identify trustworthy parties in open systems. The approach is twofold: agents observe the behavior of others and thus collect information for establishing an initial trust model; and, in order to adapt quickly to a new or rapidly changing environment, agents can also draw on observations reported by other agents. The practical relevance of these ideas is demonstrated by a direct mapping from the scenario to an electronic commerce setting.
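To make the two-pronged approach concrete, the sketch below illustrates one plausible way an agent might combine its own observations with witness reports. This is a hypothetical minimal implementation for illustration only, not the paper's actual formalization: the class name, the frequency-based trust estimate, and the weighting scheme are all assumptions.

```python
from collections import defaultdict


class TrustModel:
    """Hypothetical sketch of a trust estimator (not the paper's exact algorithm).

    Direct trust in an agent is estimated as the fraction of observed
    interactions in which that agent behaved cooperatively. Reports from
    witnesses are folded in, discounted by the trust placed in each
    witness, so a newcomer can bootstrap a model of an unfamiliar or
    rapidly changing environment.
    """

    def __init__(self, prior: float = 0.5):
        self.prior = prior              # trust assigned before any evidence
        self.honest = defaultdict(int)  # cooperative interactions per agent
        self.total = defaultdict(int)   # all observed interactions per agent

    def observe(self, agent: str, cooperated: bool) -> None:
        """Record a directly observed interaction with `agent`."""
        self.total[agent] += 1
        if cooperated:
            self.honest[agent] += 1

    def direct_trust(self, agent: str) -> float:
        """Trust from own observations; falls back to the prior if none exist."""
        if self.total[agent] == 0:
            return self.prior
        return self.honest[agent] / self.total[agent]

    def combined_trust(self, agent: str, witness_reports: dict[str, float]) -> float:
        """Blend own observations with witness reports.

        `witness_reports` maps a witness's name to the trust value that
        witness reports for `agent`. Each report is weighted by our own
        trust in the witness, so deceitful witnesses lose influence as
        their dishonesty is observed.
        """
        weight = self.total[agent]                 # confidence in own evidence
        score = self.direct_trust(agent) * weight
        for witness, reported in witness_reports.items():
            w = self.direct_trust(witness)         # discount by trust in the witness
            score += reported * w
            weight += w
        return score / weight if weight else self.prior
```

For example, an agent that has never interacted with a seller can still form an estimate from two witnesses it trusts at 0.9 and 0.2; the report of the less trusted witness contributes correspondingly little. Weighting an agent's own evidence by its raw observation count is just one of several plausible design choices; the paper's formalization differs in its details.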
