The Biasing of Action Selection Produces Emergent Human-Robot Interactions in Autonomous Driving

This letter describes a means to produce emergent collaboration between a human driver and an artificial co-driver agent. The work exploits the hypothesis that human-human cooperation emerges from a shared understanding of the given context’s affordances and emulates the same principle: the observation of one agent’s behavior steers another agent’s decision-making by favoring the selection of the goals that would produce the observed activity. Specifically, we describe how to steer the decision-making of a special self-driving agent by weighting the agent’s action selection process with input from a dummy human driver’s activity. In this way, human input maps onto the safe and affordable actions recognized by the agent. We demonstrate emergent, efficient driving; collaboration with the human; and rejection of unsafe human requests.
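The biasing principle described above can be illustrated with a minimal sketch. The function below, and all names in it (`biased_action_selection`, `bias`, the compatibility scores), are assumptions for illustration, not the letter's actual implementation: the agent's own utilities over candidate actions are blended with a score measuring each action's compatibility with the observed human activity, while actions the agent deems unsafe are excluded regardless of the human's preference.

```python
import numpy as np

def biased_action_selection(action_utilities, human_compatibility, safe_mask, bias=0.6):
    """Pick an action by biasing the agent's utilities toward the
    actions most compatible with the observed human input.

    action_utilities    : agent's own preference for each candidate action
    human_compatibility : how well each action matches the observed human activity
    safe_mask           : boolean, True for actions the agent considers safe
    bias                : weight given to the human input (0 = ignore human)
    """
    scores = (1.0 - bias) * np.asarray(action_utilities) + bias * np.asarray(human_compatibility)
    # Unsafe actions are rejected outright: human input cannot override safety.
    scores = np.where(safe_mask, scores, -np.inf)
    return int(np.argmax(scores))

# The agent alone prefers action 0, but the human's behavior matches action 2.
utilities = [0.9, 0.5, 0.6]
compat = [0.0, 0.1, 1.0]

chosen = biased_action_selection(utilities, compat, safe_mask=[True, True, True])
rejected = biased_action_selection(utilities, compat, safe_mask=[True, True, False])
```

With all actions safe, the human input steers the selection to action 2; when action 2 is flagged unsafe, the agent falls back to its own best safe action, mirroring the rejection of unsafe human requests described in the abstract.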