From Safety to Guilty & from Liveness to Niceness

We want robots to perform challenging tasks (liveness), but we also do not want them to endanger their surroundings (safety). Formal methods provide ways of proving such correctness properties, but they have the habit of only saying “yes” when the answer is indeed “yes” (soundness). More often than not, formal methods say “no”: they find out that the system is neither safe nor live, because there are “unexpected” circumstances in which the robot simply cannot do what we expect of it. Inspecting those unexpected circumstances is informative and identifies constraints on reasonable behavior of the environment. This ultimately leads from safety to the question of who is guilty, depending on whose action caused the safety violation. It likewise leads from liveness to the question of what behavior of the environment is nice enough for the robot to finish its task.

I. FORMAL METHODS FOR ROBOTICS

Robots often interact with a dynamically changing environment, in close proximity to humans or critical infrastructure. Thus, safety is key. But we also want a robot to complete a useful task or achieve a particular goal (liveness). Formal verification methods help to exhaustively analyze a robot and its control algorithms for correctness. This paper is based on our experience with formally verifying safety and liveness properties of autonomous robotic ground vehicles [2]. The overall challenge arises because robots not only execute discrete (control) algorithms, but also interact with the real world through sensors and actuators. For verification purposes, we thus need to take into account the discrete control algorithms and the continuous physical behavior of both our own robot and its environment. Hybrid systems are a suitable mathematical model for systems with such interacting discrete and continuous behavior.
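To make the hybrid-systems view concrete, the following is a minimal illustrative sketch (our notation, with placeholder symbols init, b, A, safe_acc, and obstacle that are not taken from the cited work) of a safety assertion in differential dynamic logic for a point robot that repeatedly either brakes or, if a guard holds, accelerates, and then moves continuously:

```latex
% A prototypical dL safety assertion: from every initial state satisfying init,
% all runs of repeated control (discrete) followed by motion (continuous) stay safe.
\[
  \mathit{init} \;\rightarrow\;
  \bigl[\, \bigl( (a := -b \;\cup\; ?\,\mathit{safe}_{\mathrm{acc}};\; a := A);\;
  \{\, x' = v,\; v' = a \;\&\; v \ge 0 \,\} \bigr)^{*} \,\bigr]\;
  (x \le \mathit{obstacle})
\]
```

Here $\cup$ is nondeterministic choice, $?\,\mathit{safe}_{\mathrm{acc}}$ is a test guarding acceleration, the differential equation with evolution domain $v \ge 0$ models the continuous plant, $^{*}$ repeats the control-plant loop, and the box modality $[\cdot]$ asserts that the safety condition holds along all runs.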
We focus on theorem proving for hybrid systems as our verification method, and use differential dynamic logic [4], implemented in the KeYmaera prover [5], together with the modeling tool Sphinx [3], to illustrate the challenges that arise from analyzing safety and