Explaining robot actions

To increase human trust in robots, we have developed a system that provides insight into robotic behaviors by enabling a robot to answer questions people pose about its actions (e.g., Q: “Why did you turn left there?” A: “I detected a person at the end of the hallway.”). Our focus is on generating these explanations in human-understandable terms, even though the robot makes its decisions and executes its actions using a mathematical, robot-specific representation and planning system. We present our work to date on this topic, including system design and experiments, and discuss areas for future work.
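To make the core idea concrete, the sketch below illustrates one simple way such a translation could work; it is not the paper's actual system, and all names (execution_log, TEMPLATES, explain) are hypothetical. The robot logs the planner predicates that triggered each action, and a hand-written template table renders those robot-specific predicates as human-understandable answers to "Why did you <action>?" questions.

```python
# Hypothetical log of (action, triggering predicates) pairs recorded
# by the robot's planner during execution.
execution_log = [
    ("turn_left", ["person_detected(hallway_end)"]),
    ("stop", ["battery_low"]),
]

# Hand-written templates translating robot-specific predicates into
# human-understandable phrases (an assumption for illustration; a real
# system would derive these from the planning representation).
TEMPLATES = {
    "person_detected(hallway_end)": "I detected a person at the end of the hallway.",
    "battery_low": "My battery was low.",
}

def explain(action: str) -> str:
    """Answer 'Why did you <action>?' from the logged trigger predicates."""
    for logged_action, predicates in execution_log:
        if logged_action == action:
            # Fall back to the raw predicate if no template exists.
            reasons = [TEMPLATES.get(p, p) for p in predicates]
            return " ".join(reasons)
    return "I have no record of performing that action."

print(explain("turn_left"))  # -> I detected a person at the end of the hallway.
```

The design choice this sketch highlights is the one the abstract emphasizes: the explanation layer sits apart from the planner, consuming its internal representation and re-expressing it in everyday language rather than exposing the underlying mathematical model directly.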