Trust and Reputation Mechanisms for Multi-agent Robotic Systems

In this paper we analyze the behavior of multi-agent robotic systems with decentralized control under destructive information influence from saboteur robots. We consider a class of covert attacks in which saboteurs intercept messages, fabricate and transmit misinformation to the group, and perform other actions that leave no visible signs of intrusion. We review existing information-security models for multi-agent systems that rely on a trust measure computed in the course of agent interaction. We then propose an information-security mechanism in which robot agents assign trust levels to one another by analyzing, with their onboard sensors, the situation that develops at each step of an iterative algorithm. To improve the similarity metric for objects of the same category (“saboteur” or “legitimate agent”), we propose an algorithm that computes the reputation of an agent as a measure of the group opinion, accumulated over time by legitimate agents, about the qualities of agents in the “saboteur” category. We show that the inter-cluster distance can serve as a quality metric for trust models in multi-agent systems. Finally, we give an example demonstrating how the proposed mechanism detects saboteurs in different situations when the basic target-distribution algorithm is run in a group of robots.
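The sketch below illustrates, under simplifying assumptions, how such a mechanism might be organized: pairwise trust values produced at each iteration are accumulated into a reputation score, agents are split into “legitimate” and “saboteur” clusters by a reputation threshold, and the gap between the mean reputations of the two clusters serves as the quality metric. This is not the authors' implementation; the class and method names (TrustModel, update_trust, REPUTATION threshold of 0.5) are illustrative assumptions.

```python
# Minimal sketch of trust/reputation bookkeeping in a robot group.
# Assumes reputation is the time average of pairwise trust values;
# the 0.5 neutral prior and threshold are illustrative choices.

from collections import defaultdict
from statistics import mean


class TrustModel:
    def __init__(self, agent_ids):
        self.agent_ids = list(agent_ids)
        # trust_history[j]: trust values assigned to agent j over all iterations
        self.trust_history = defaultdict(list)

    def update_trust(self, observed, trust_value):
        """Record the trust level assigned to an observed agent after the
        situation at the current iteration is analyzed with onboard sensors."""
        self.trust_history[observed].append(trust_value)

    def reputation(self, agent_id):
        """Reputation: the group opinion about an agent accumulated over time."""
        history = self.trust_history[agent_id]
        return mean(history) if history else 0.5  # neutral prior

    def classify(self, threshold=0.5):
        """Split agents into 'legitimate' and 'saboteur' by reputation."""
        return {
            a: ("legitimate" if self.reputation(a) >= threshold else "saboteur")
            for a in self.agent_ids
        }

    def inter_cluster_distance(self, threshold=0.5):
        """Distance between the mean reputations of the two clusters;
        a larger gap indicates a trust model that separates the groups better."""
        labels = self.classify(threshold)
        legit = [self.reputation(a) for a, lab in labels.items() if lab == "legitimate"]
        sabot = [self.reputation(a) for a, lab in labels.items() if lab == "saboteur"]
        if not legit or not sabot:
            return 0.0
        return abs(mean(legit) - mean(sabot))
```

For example, after each step of the target-distribution algorithm every legitimate robot would call update_trust() for each neighbor it observed, and the group could monitor inter_cluster_distance() to judge how well the trust model discriminates saboteurs.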
