Engineering trust alignment: theory and practice
CSIC-IIIA Technical Report: TR-IIIA-2010-02

In open multi-agent systems, trust models are an important tool for agents to achieve effective interactions. However, in such open systems the agents do not necessarily use the same, or even similar, trust models, which leads to semantic differences between the trust evaluations of different agents. Hence, to make use of communicated trust evaluations, agents need to align their trust models. We argue that currently proposed solutions, such as common ontologies or ontology alignment methods, introduce problems of their own, and we propose a novel approach. We show how a trust alignment can be formed by considering the interactions agents share, and we describe a mathematical framework that formulates precisely how those interactions support each agent's trust evaluations. We show how this framework can be used in the alignment process and explain how an alignment should be learned. Finally, we demonstrate the alignment process in practice, using a first-order regression algorithm to learn an alignment and testing it in an example scenario.
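As a rough illustration of the idea, the alignment problem can be read as a regression task: from a set of shared interactions, an agent learns a function that maps the other agent's communicated trust evaluations (together with observable properties of those interactions) onto its own trust evaluations. The report itself uses a first-order (relational) regression learner over relational descriptions of interactions; the sketch below is a hypothetical, simplified propositional version using ordinary least squares, with made-up feature names and values, purely to show the shape of the learning step.

    # Minimal, hypothetical sketch of trust alignment as regression (not the
    # report's first-order learner): translate another agent's communicated
    # trust values into our own scale, using shared interactions as training data.
    import numpy as np

    # Hypothetical shared interactions:
    # columns = [other agent's trust value, delivery_on_time, quality_ok]
    X = np.array([
        [0.9, 1.0, 1.0],
        [0.8, 1.0, 0.0],
        [0.3, 0.0, 1.0],
        [0.1, 0.0, 0.0],
    ])
    # Our own trust evaluations of the same interactions.
    y = np.array([0.95, 0.55, 0.60, 0.05])

    # Fit a linear alignment: own_trust ~= w * features + b.
    A = np.hstack([X, np.ones((X.shape[0], 1))])   # add intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    def align(other_trust, on_time, quality_ok):
        """Interpret a communicated trust value in our own terms."""
        return float(np.dot(coef, [other_trust, on_time, quality_ok, 1.0]))

    # Example: translate a newly communicated evaluation.
    print(align(0.7, 1.0, 1.0))

The design point the sketch is meant to convey is only that the alignment is learned from the interactions both agents have observed, rather than from a shared ontology of trust terms; a relational learner, as used in the report, additionally lets the learned alignment refer to the structure of the interactions themselves.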
