Reliable and Reproducible Competition Results with BenchExec and Witnesses (Report on SV-COMP 2016)

The 5th Competition on Software Verification (SV-COMP 2016) continues the tradition of a thorough comparative evaluation of fully-automatic software verifiers. This report presents the results of the competition and includes a special section that describes how SV-COMP ensures that the experiments are reliably executed, precisely measured, and organized such that the results can be reproduced later. SV-COMP uses BenchExec for controlling and measuring the verification runs, and requires a violation witness in an exchangeable format whenever a verifier reports that a property is violated. Each witness was validated by two independent and publicly available witness validators. The result tables report the state of the art in software verification in terms of effectiveness and efficiency. The competition used 6 661 verification tasks, each consisting of a C program and a property (reachability, memory safety, or termination). SV-COMP 2016 had 35 participating verification systems (22 in 2015) from 16 countries.
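To make the notion of a verification task concrete, the following is a minimal sketch of a reachability task in the style of the SV-COMP benchmark collection. The `__VERIFIER_*` function names follow the competition's conventions for error locations and nondeterministic inputs; the specific program is an illustrative assumption, not an actual benchmark from the competition's task set.

```c
/* Illustrative sketch of an SV-COMP-style reachability task (not an actual
 * benchmark). The property to check is that the call to __VERIFIER_error()
 * is unreachable; if a verifier claims it is reachable, it must emit a
 * violation witness describing a path to that call. */

extern void __VERIFIER_error(void);       /* reaching this call violates the property */
extern int  __VERIFIER_nondet_int(void);  /* models an arbitrary (nondeterministic) input */

int main(void) {
    int x = __VERIFIER_nondet_int();
    int y = x;
    if (x != y) {
        __VERIFIER_error();  /* unreachable: the expected verdict is TRUE */
    }
    return 0;
}
```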
