On Heterogeneous Memory in Hidden-Action Setups: An Agent-Based Approach

We follow the agentization approach and transform the standard hidden-action model introduced by Holmström into an agent-based model. Doing so allows us to relax some of the rather "heroic" assumptions incorporated in the model, related to (i) the availability of information about the environment and (ii) the principal's and agent's cognitive capabilities, with a particular focus on their memory. In contrast to the standard hidden-action model, the principal and the agent learn about the environment over time, with varying capabilities to process the acquired information. Moreover, we consider environments with different characteristics, such as their level of stability. Our analysis focuses on how closely and how quickly the incentive scheme that emerges endogenously from the agent-based model converges to the second-best solution proposed by the standard hidden-action model. We also investigate whether a stable solution can emerge from the agent-based model variant. The results show that in stable environments the emergent result can come close to the second-best solution of the standard hidden-action model. Surprisingly, the results also indicate that turbulence in the environment leads to stability in earlier time periods.
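To make the agentized setup concrete, the sketch below outlines one possible simulation loop in Python. It is an illustrative sketch only: the production function, the agent's utility, both learning rules, the memory length, and all parameter values are our own assumptions for exposition, not the paper's specification. The agent hill-climbs on its hidden effort given the currently offered premium share, while the principal adjusts that share using only a bounded memory of realized profits.

```python
# A minimal sketch of an agentized hidden-action loop with bounded memory.
# All functional forms, learning rules, and parameters below are
# illustrative assumptions; they are not taken from the paper.
import random
import statistics

random.seed(42)

MEMORY = 10        # assumed memory length: rounds the principal remembers
SIGMA = 0.5        # assumed std. dev. of the environmental shock
ROUNDS = 500

def outcome(effort, shock):
    # Assumed production function: output increases in (hidden) effort
    # and is perturbed by an exogenous environmental shock.
    return effort + shock

def expected_agent_utility(share, effort):
    # Assumed risk-neutral agent with quadratic effort cost; the agent
    # evaluates effort against an expected shock of zero.
    return share * effort - 0.5 * effort ** 2

share, effort = 0.5, 0.1          # initial premium share and effort
best_share, best_profit = share, float("-inf")
profit_memory = []                # principal's bounded memory of profits

for t in range(ROUNDS):
    shock = random.gauss(0.0, SIGMA)
    x = outcome(effort, shock)

    # The principal only remembers the last MEMORY realized profits.
    profit_memory.append((1.0 - share) * x)
    profit_memory = profit_memory[-MEMORY:]

    # Agent: hill-climbing on effort given the currently offered share.
    candidate = max(0.0, effort + random.uniform(-0.1, 0.1))
    if expected_agent_utility(share, candidate) >= expected_agent_utility(share, effort):
        effort = candidate

    # Principal: every MEMORY rounds, keep the share with the best
    # remembered mean profit and explore a small perturbation of it.
    if (t + 1) % MEMORY == 0:
        mean_profit = statistics.mean(profit_memory)
        if mean_profit > best_profit:
            best_profit, best_share = mean_profit, share
        share = min(1.0, max(0.0, best_share + random.uniform(-0.05, 0.05)))

print(f"emergent share: {share:.2f}, effort: {effort:.2f}")
```

Under these assumed rules, shortening MEMORY mimics tighter cognitive limits, while raising SIGMA mimics a more turbulent environment, which are the two dimensions the analysis varies.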