Assessment of stability of algorithms based on trust and reputation model

Swarm robotic systems are actively developed and widely studied in world scientific practice. The multi-agent, distributed approach to building the artificial intelligence of autonomous systems is expected to solve a wide range of complex problems in areas such as environmental protection, medicine, cleaning, and patrolling. This makes the study of such systems (their design and testing) from an information security perspective highly relevant. A key prerequisite for the wide practical use of swarm robotic systems is the development of specific guidelines and algorithms for organizing group actions. This research proposes using a trust and reputation model to provide information security for a swarm system. The swarm's agents generate trust levels for one another, using their sensor devices, based on an analysis of the situation at the k-th iteration of the algorithm. Collective recognition of saboteurs is then carried out on the basis of the calculated trust levels. A software simulator was designed to perform the experiments; it allows varying the basic parameters of the swarm robotic system (number of agents, number of targets, communication range, number of saboteurs).
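The collective recognition step described above can be sketched as follows. This is a minimal illustration under assumptions of our own: the function name `detect_saboteurs`, the use of the median as the aggregation rule, and the trust threshold are all hypothetical, since the abstract does not specify the exact decision procedure.

```python
# Hypothetical sketch: collective saboteur recognition from pairwise trust
# levels. Names, the median aggregation rule, and the threshold are
# assumptions, not the paper's exact algorithm.
import statistics

def detect_saboteurs(trust, threshold=0.5):
    """trust[i][j] is agent i's trust level in agent j at the current
    iteration. An agent j is flagged as a saboteur when the median trust
    that the rest of the swarm places in it falls below `threshold`."""
    n = len(trust)
    saboteurs = []
    for j in range(n):
        # Collect every other agent's opinion of agent j.
        opinions = [trust[i][j] for i in range(n) if i != j]
        if statistics.median(opinions) < threshold:
            saboteurs.append(j)
    return saboteurs

# Example: agent 2 behaves anomalously, so the other agents assign it
# low trust, and it is collectively recognized as a saboteur.
trust = [
    [1.0, 0.9, 0.2],
    [0.8, 1.0, 0.1],
    [0.9, 0.7, 1.0],
]
print(detect_saboteurs(trust))  # -> [2]
```

Using the median rather than the mean makes the collective decision more robust when a minority of saboteurs report dishonestly low trust in honest agents.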
