Verifying Operating System Security

A confined program is one that cannot leak information to an unauthorized party or modify unauthorized resources. Confinement is an essential feature of any secure component-based system. This paper presents a proof of correctness of the EROS operating system architecture with respect to confinement. We give a formal statement of the confinement requirements, construct a model of the architecture’s security policy and operational semantics, and show that the architecture enforces the confinement requirements provided that a small number of initial static checks on the confined subsystem succeed. The mechanism relies neither on the run-time values of user state nor on analysis of the confined programs’ algorithms. Our verification methodology borrows heavily from techniques developed in the programming languages community: we view the operating system as a programming language whose operations are the kernel calls. This has the advantage that the security requirements of concern can be stated in forms analogous to type inference and type soundness, which programming-language techniques are well suited to handle. The proof identifies a set of fundamental lemmas that any system must satisfy in order to confine information flow. The method generalizes to any capability system.
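To make the flavor of the "initial static checks" concrete, the following is a minimal sketch of such a check, not the EROS implementation: every capability held by the confined subsystem at instantiation must be either inherently safe (unable to transmit information outward) or an explicitly authorized "hole". All names, types, and the particular safety rule below are illustrative assumptions.

```python
# Hypothetical sketch of a static confinement check of the kind the
# abstract describes. The decision uses only the subsystem's initial
# capabilities, never run-time values or the program's algorithm.
from dataclasses import dataclass


@dataclass(frozen=True)
class Capability:
    target: str            # resource the capability names (illustrative)
    rights: frozenset      # e.g. frozenset({"read"}) or {"read", "write"}


def inherently_safe(cap: Capability) -> bool:
    # Assumed rule: a capability conveying only read access to its
    # target cannot be used to leak information outward.
    return cap.rights <= {"read"}


def confinement_check(initial_caps, authorized_holes):
    """Return True iff every initial capability is safe or is an
    explicitly authorized hole declared by the requestor."""
    return all(
        inherently_safe(cap) or cap in authorized_holes
        for cap in initial_caps
    )
```

Under this sketch, a subsystem holding a write capability passes the check only if that capability was declared as an authorized hole; otherwise instantiation is refused before the subsystem ever runs.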