Soft Security: Isolating Unreliable Agents from Society

This paper introduces a multi-agent belief revision algorithm that uses knowledge about the reliability or trustworthiness of information sources to evaluate both incoming information and the sources providing it. The algorithm also allows an agent to learn the trustworthiness of other agents using (1) dissimilarity measures (measures that quantify how much incorrect information a particular source has provided), computed from the proposed belief revision processes (Direct Trust Revision), and/or (2) trust information communicated by other agents (Recommended Trust Revision). A set of experiments is performed to validate the proposed Trust Revision approaches and measure their performance. The performance (responsiveness and correctness) of the proposed algorithm is analyzed in terms of delay time (the time required for the step response of an agent's belief state to reach 50 percent of the ground-truth value), maximum overshoot (the largest deviation of the belief value above the ground-truth value during the transient state), and steady-state error (the deviation of the belief value from the ground truth after the transient state). The results show a design trade-off between responsiveness to system configuration or environmental changes and resilience to noise. An agent designer may either (1) select one of the proposed Trust Revision algorithms or (2) use both to achieve better performance at the cost of system resources such as computation power and communication bandwidth.
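
To make the two revision mechanisms concrete, the following is a minimal sketch of how an agent might maintain per-source trust values. The abstract does not give the actual update rules, so the class and method names (`TrustModel`, `direct_revision`, `recommended_revision`), the neutral initial trust of 0.5, and the linear blending weights are assumptions made purely for illustration, not the paper's method.

```python
class TrustModel:
    """Illustrative per-agent trust store; the update rules below are assumed."""

    def __init__(self, alpha: float = 0.3, beta: float = 0.2):
        # alpha: learning rate for direct (dissimilarity-based) revision
        # beta: weight given to trust values recommended by other agents
        self.alpha = alpha
        self.beta = beta
        self.trust: dict[str, float] = {}  # source id -> trust in [0, 1]

    def get(self, source: str) -> float:
        # Unknown sources start at an assumed neutral trust level of 0.5.
        return self.trust.get(source, 0.5)

    def direct_revision(self, source: str, dissimilarity: float) -> None:
        """Direct Trust Revision: move trust toward (1 - dissimilarity), so a
        source whose reports conflict with the revised beliefs loses trust."""
        target = 1.0 - min(max(dissimilarity, 0.0), 1.0)
        self.trust[source] = (1 - self.alpha) * self.get(source) + self.alpha * target

    def recommended_revision(self, source: str, recommender: str,
                             recommended_trust: float) -> None:
        """Recommended Trust Revision: blend in a trust value communicated by
        another agent, discounted by how much we trust the recommender."""
        weight = self.beta * self.get(recommender)
        self.trust[source] = (1 - weight) * self.get(source) + weight * recommended_trust
```

An agent could call `direct_revision` after each belief revision step and `recommended_revision` whenever a peer shares its trust estimates; using both corresponds to the combined configuration discussed above.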
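
The three evaluation metrics are standard step-response measures and can be computed from a recorded trace of an agent's belief value after a step change in the ground truth. The sketch below is an assumed post-processing helper (the function name, the `settle_index` parameter, and the use of a mean absolute deviation for steady-state error are illustrative choices, not taken from the paper).

```python
import numpy as np


def step_response_metrics(belief: np.ndarray, ground_truth: float,
                          settle_index: int) -> dict:
    """Compute delay time, maximum overshoot, and steady-state error from a
    trace of belief values; `settle_index` marks the assumed end of the
    transient state."""
    # Delay time: first step at which the belief reaches 50% of the ground truth.
    reached = np.flatnonzero(belief >= 0.5 * ground_truth)
    delay_time = int(reached[0]) if reached.size else None

    # Maximum overshoot: largest excursion above the ground-truth value
    # during the transient state (0 if the belief never overshoots).
    max_overshoot = max(0.0, float(belief[:settle_index].max() - ground_truth))

    # Steady-state error: mean deviation from the ground truth after the transient.
    steady_state_error = float(np.mean(np.abs(belief[settle_index:] - ground_truth)))

    return {"delay_time": delay_time,
            "max_overshoot": max_overshoot,
            "steady_state_error": steady_state_error}
```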