Learning Models from Temporal-Logic Properties via Explanations

Given a model and a property expressed in temporal logic, a model checker typically produces a counterexample when the model does not satisfy the property. This counterexample is meant to guide the manual modification of the model so that the modified model satisfies the property. We observe that basing the modification of a model on negative information (why a formula is not true) can be limiting, and we present a method that employs positive information instead. Our method incrementally learns a subformula and marks the part of the model that makes the already learned subformula true (i.e., an explanation). It then attempts to learn the rest of the formula without altering the marked part of the model.
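To make the incremental loop concrete, the following is a minimal Python sketch of the procedure the abstract describes. All names here (`Model`, `check`, `explain`, `repair`, `learn_incrementally`) are hypothetical placeholders introduced for illustration, not APIs or algorithms from the paper; it assumes the property has already been split into subformulas and that explanations can be represented as sets of states.

```python
from dataclasses import dataclass, field

@dataclass
class Model:
    """A transition system; 'marked' records states frozen by earlier explanations."""
    states: set
    transitions: dict  # state -> set of successor states
    labels: dict       # state -> set of atomic propositions
    marked: set = field(default_factory=set)

def learn_incrementally(model, subformulas, check, explain, repair):
    """Hypothetical sketch of the loop described in the abstract.

    check(model, phi)   -> True iff model satisfies phi (a model checker).
    explain(model, phi) -> the part of the model (here: a set of states)
                           that makes phi true, i.e. an explanation.
    repair(model, phi)  -> a modified model satisfying phi that leaves
                           model.marked untouched, or None if impossible.
    """
    for phi in subformulas:            # learn one subformula at a time
        if not check(model, phi):
            model = repair(model, phi)  # modify only unmarked parts
            if model is None:
                raise ValueError(f"cannot satisfy {phi} without altering marked parts")
        # Freeze the explanation so later repairs cannot invalidate this subformula.
        model.marked |= explain(model, phi)
    return model
```

The key design point the sketch tries to capture is that marking is monotone: once an explanation is recorded, every later repair step is constrained to the unmarked portion of the model, so previously established subformulas remain true (positive information is preserved rather than re-derived from counterexamples).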