Learning whom to trust: using graphical models for learning about information providers

In many multi-agent systems, information is distributed among potential providers that vary both in their capability to report useful information and in the extent to which their reports may be biased. This abstract shows that graphical models can be used to simultaneously learn the complex reporting policies agents use and their capabilities, to weigh the benefits of different combinations of information providers, and to optimally choose a combination of information providers that minimizes error. An agent's policy is the way in which the agent reports information. We show that these models capture agents that vary in both their capabilities and their reporting policies. Agents using these graphical models outperformed the top contestants of the recent international Agent Reputation and Trust (ART) testbed competition. Further experiments show that graphical models can accurately model agents that use complex policies to decide how to report information, and can determine how to combine these reports to minimize error.
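To make the idea concrete, the sketch below illustrates the general workflow described above under a deliberately simple assumption: each provider's report is modelled as the true value plus a learned bias and Gaussian noise, a stand-in for the richer graphical models the abstract refers to. The function names (`fit_provider`, `fused_error`, `best_combination`) and the linear-Gaussian report model are hypothetical and are not taken from the paper; they only show how learned provider parameters can be used to weigh and choose combinations of providers that minimize expected error.

```python
import itertools
import numpy as np

# Hypothetical, simplified provider model (not the paper's graphical models):
#   report = truth + bias_i + noise_i,   noise_i ~ N(0, var_i)

def fit_provider(reports, truths):
    """Estimate a provider's bias and noise variance from past (report, truth) pairs."""
    errors = np.asarray(reports, dtype=float) - np.asarray(truths, dtype=float)
    bias = errors.mean()
    var = errors.var(ddof=1) if len(errors) > 1 else 1.0
    return bias, max(var, 1e-6)

def fused_error(variances):
    """Expected squared error of an inverse-variance-weighted fusion of bias-corrected reports."""
    return 1.0 / sum(1.0 / v for v in variances)

def best_combination(provider_params, max_size):
    """Choose the subset of providers (up to max_size) minimizing expected fused error."""
    best, best_err = None, float("inf")
    for k in range(1, max_size + 1):
        for subset in itertools.combinations(provider_params.items(), k):
            err = fused_error([var for _, (_, var) in subset])
            if err < best_err:
                best, best_err = subset, err
    return [name for name, _ in best], best_err

# Toy usage: learn each provider's parameters, then pick whom to consult.
history = {
    "a": ([10.2, 9.8, 10.5], [10.0, 10.0, 10.0]),   # low bias, low noise
    "b": ([12.0, 11.5, 12.3], [10.0, 10.0, 10.0]),   # consistently biased upward
}
params = {name: fit_provider(r, t) for name, (r, t) in history.items()}
chosen, expected_err = best_combination(params, max_size=2)
print(chosen, expected_err)
```

Because the fused error here only decreases as providers are added, the toy version will always consult everyone; in the setting the abstract describes, consulting a provider carries a cost and providers follow more complex reporting policies, which is what makes the choice of combination non-trivial.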