Traditionally, a verification task is considered solved as soon as a property violation or a correctness proof is found. In practice, this is where the actual work starts: Is it just a false alarm? Is the error reproducible? Can the error report later be reused for bug fixing or regression testing? The advent of exchangeable witnesses is a paradigm shift in verification, from the simple answers true and false toward qualitatively more valuable information about the reason for the property violation. This paper describes a convenient web-based toolchain that can be used to answer the above questions; as an example application, we consider the verification of C programs. Our first component collects witnesses and stores them for later reuse: once a bug is fixed, the stored witness can be replayed and should now be rejected; if the bug was not scheduled for fixing, the database can later provide the witness when an engineer decides to fix it. Our second component is a web service that takes a violation witness as input and (re-)validates it, i.e., it replays the witness on the system in order to re-explore the state space in question. The third component is a web service that builds on the second step by offering an interactive visualization that interconnects the error path, the system's sources, the values on the path (test vectors), and the reachability graph. We evaluated the feasibility of our approach on a large benchmark of verification tasks.
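In this setting, violation witnesses are exchanged between verifiers and validators as GraphML documents that describe an automaton over program transitions. As a minimal sketch of what a consumer of such a witness might do, the following reads the edges of a witness automaton together with their data annotations; the embedded witness and its data keys (`startline`, `assumption`) are simplified, hypothetical examples, not the full exchange format:

```python
# Sketch: reading the edges of a GraphML-based violation witness.
# The embedded witness below is a hypothetical, heavily simplified example.
import xml.etree.ElementTree as ET

GRAPHML_NS = "{http://graphml.graphdrawing.org/xmlns}"

witness_xml = """<?xml version="1.0"?>
<graphml xmlns="http://graphml.graphdrawing.org/xmlns">
  <graph edgedefault="directed">
    <node id="entry"/>
    <node id="error"/>
    <edge source="entry" target="error">
      <data key="startline">7</data>
      <data key="assumption">x == 0</data>
    </edge>
  </graph>
</graphml>
"""

def witness_edges(xml_text):
    """Yield (source, target, {key: value}) for each automaton edge."""
    root = ET.fromstring(xml_text)
    for edge in root.iter(GRAPHML_NS + "edge"):
        data = {d.get("key"): d.text for d in edge.iter(GRAPHML_NS + "data")}
        yield edge.get("source"), edge.get("target"), data

for src, dst, data in witness_edges(witness_xml):
    print(src, "->", dst, data)
```

A validator would use such annotations (source lines, assumed variable values) to restrict its state-space exploration to the path described by the witness; a visualizer would map the same annotations back onto the program sources.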