ARCH-COMP21 Repeatability Evaluation Report
This report presents the results of the repeatability evaluation for the 5th International Competition on Verifying Continuous and Hybrid Systems (ARCH-COMP'21). The competition took place as part of the workshop Applied Verification for Continuous and Hybrid Systems (ARCH) in 2021, affiliated with the 7th IFAC Conference on Analysis and Design of Hybrid Systems (ADHS'21). In its fifth edition, seventeen tools submitted artifacts through a Git repository for the repeatability evaluation, applied to solve benchmark problems across seven competition categories. The majority of participants adhered to the requirements for this year's repeatability evaluation, namely to submit scripts that automatically install and execute each tool in a containerized virtual environment (specifically, Dockerfiles executed within Docker), and several categories used performance-evaluation information from a common execution platform. The repeatability results represent a snapshot of the current landscape of tools and of the types of benchmarks for which they are particularly suited and for which others may repeat their analyses. Given the diversity of problems in the verification of continuous and hybrid systems, and following standard practice in repeatability evaluations, we evaluate the tools as passing or failing to be repeatable.
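To make the submission requirement concrete, the following is a minimal sketch of what a containerized artifact might look like. It is illustrative only: the base image, dependency list, and file names (requirements.txt, measure_all.sh) are assumptions for the sake of example, not taken from any actual submission.

```dockerfile
# Hypothetical submission Dockerfile for a repeatability artifact.
FROM ubuntu:20.04

# Install system-level dependencies needed to build and run the tool
# (the package list is an assumption; each tool would list its own).
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential git python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

# Copy the submitted artifact: tool sources, benchmark models, run scripts.
COPY . /arch-comp
WORKDIR /arch-comp

# Install the tool's own dependencies (illustrative file name).
RUN pip3 install -r requirements.txt

# On container startup, run all benchmark instances for the tool's
# categories and write the results to an output file.
CMD ["./measure_all.sh"]
```

With such a file in place, an evaluator could in principle reproduce the submitted results with two commands, `docker build -t tool .` followed by `docker run tool`, which is the kind of automatic install-and-execute workflow the requirement above asks for.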