A Survey of Formal Verification Approaches for Practical Systems

The development of any large-scale software system involves discovering and eliminating an enormous number of bugs. The Linux kernel bug tracker listed 2,830 known bugs as of April 2015, with many more likely still undiscovered [3]. A bug in Google's LevelDB prevented users from storing the block chain and participating in the Bitcoin network [1]. A missing checksum on internal state caused Amazon S3 to become unavailable for hours in 2008 [2]. To ensure reliability, industry best practice relies on frequent peer code review, extensive test suites, and occasionally static and dynamic analysis. Yet despite the significant effort spent eliminating programming errors, practical systems deployed today still suffer frequent failures, sometimes with catastrophic consequences [2].

Formal verification is the only known way to guarantee that a system is free of entire classes of bugs and behaves correctly with respect to a set of high-level specifications. However, formulating a precise specification and producing a formal proof that an implementation meets it is often a significant undertaking. The conventional wisdom in the systems community is that the cost of producing such a proof far outweighs its benefits. In reality, program verification technology has matured to the point that several recent projects have formally verified systems at a scale previously thought impractical. Existing approaches, however, differ widely in their goals, tools, and end-to-end guarantees.

We provide a systematic survey of existing formal verification approaches, critically examining each to answer our key question: how can practical systems programming effectively use formal verification to ensure reliability? Specifically, we give a taxonomy of existing approaches, classifying each based on the following aspects: