Adapting Reinforcement Learning for Trust: Effective Modeling in Dynamic Environments

In open multiagent systems, agents need to model their environment in order to identify trustworthy agents. Models of the environment should be accurate so that decisions about whom to interact with can be made soundly. Traditional trust models capture specific properties of other agents, such as their expertise or reliability; building such models accurately, however, requires many prior interactions. This paper proposes an approach based on keeping track of the outcomes of an agent's own actions toward others, rather than modeling other agents' performance explicitly. In contrast to existing modeling approaches, which require domain knowledge to build models, the proposed approach can be realized effectively in any multiagent system in which the agent's actions are clearly identified. Comparisons with other modeling approaches in various environments show that the proposed approach builds more precise models in less time and adjusts its behavior quickly when other agents' behaviors change.
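To make the core idea concrete, the following is a minimal sketch of an outcome-based trust model in Python, assuming a Q-learning-style value update over the agent's own action "interact with partner p". All class names, parameters, and the toy environment are illustrative assumptions for exposition, not the paper's actual algorithm.

```python
# Sketch of the idea above: instead of modeling each partner's properties
# (expertise, reliability), the agent keeps a running value estimate for
# its own action "interact with partner p" and updates it from observed
# interaction outcomes. Names and parameters are illustrative assumptions.
import random
from collections import defaultdict

class OutcomeTrustModel:
    def __init__(self, alpha=0.1, epsilon=0.1):
        self.alpha = alpha           # learning rate: weight given to new outcomes
        self.epsilon = epsilon       # exploration rate: try lesser-known partners
        self.q = defaultdict(float)  # value of the action "interact with partner"

    def choose_partner(self, partners):
        # Epsilon-greedy: mostly pick the partner whose past interactions
        # yielded the best outcomes, occasionally explore another one.
        if random.random() < self.epsilon:
            return random.choice(partners)
        return max(partners, key=lambda p: self.q[p])

    def update(self, partner, outcome):
        # Incremental update toward the observed outcome (e.g., +1 for a
        # successful interaction, -1 for a failure). Because old estimates
        # decay geometrically, the model can adapt when a partner's
        # behavior changes, without relearning a full partner model.
        self.q[partner] += self.alpha * (outcome - self.q[partner])

# Usage: interact repeatedly, reinforcing partners that yield good outcomes.
model = OutcomeTrustModel()
for _ in range(100):
    p = model.choose_partner(["a", "b", "c"])
    reward = 1.0 if p == "b" else -0.5  # toy environment: "b" is trustworthy
    model.update(p, reward)
print(model.q)  # "b" should end up with the highest learned value
```

Note the design choice this illustrates: the learned values are attached to the agent's own actions, so the only domain knowledge required is the set of available actions, as claimed in the abstract.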