Analyzing the impact of human bias on human-agent teams in resource allocation domains

As human-agent teams are increasingly deployed in the real world, agent designers must account for the fact that humans and agents differ in their ability to specify preferences. In this paper, we focus on how human biases in specifying preferences for resources impact the performance of large, heterogeneous teams. In particular, we model two such biases: the inclination of humans to simplify their preference functions and to exaggerate their utility for desired resources. We then study the effect of these biases on two problems that are representative of most resource allocation problems addressed in the literature.