Limits to Dependability Assurance: A Controversy Revisited

More than twenty years ago, as computers were introduced into safety-critical roles in civil aircraft, there was much debate about what claims could be made for their dependability. Much of the debate focused, naturally enough, on what could be claimed for the reliability of software. A famous example was the apparent need to claim a probability of failure of less than 10⁻⁹ per hour for some flight-critical avionics. Several authors (I was one) demonstrated that such claims lay several orders of magnitude beyond what could be supported with scientific rigour. In this talk I shall revisit that debate, describing some advances that have been made in "dependability cases," particularly those involving formal notions of "confidence" in dependability claims. However, I shall also show that the bottom line has not changed significantly: although some systems have been shown to have extremely high dependability after the fact (i.e. in extensive operational use), it remains impossible to show, before a system is deployed, that it will be extremely dependable in operation. The reason is an unforgiving law relating the strength of a dependability claim to the amount of evidence needed to support it. These limits to assurance should be of interest beyond the technical community: for example, they pose difficult questions for society in estimating the risks associated with the deployment of certain novel systems.
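As an illustration of the "unforgiving law" referred to above (not part of the abstract itself), the following minimal sketch works through the standard statistical argument: if a system is observed failure-free for T operationally representative hours, a classical zero-failure bound only supports a failure-rate claim of roughly -ln(1 - confidence)/T, so a 10⁻⁹ per hour claim requires on the order of 10⁹ hours of failure-free operation. The function name and the 95% confidence figure are illustrative assumptions.

```python
import math

def failure_free_hours_needed(target_rate_per_hour: float,
                              confidence: float = 0.95) -> float:
    """Hours of failure-free, operationally representative testing needed
    before a constant failure rate below `target_rate_per_hour` can be
    claimed at the given confidence level.

    Zero-failure classical bound: after T failure-free hours, one can claim
    lambda <= -ln(1 - confidence) / T; solving for T gives the required time.
    """
    return -math.log(1.0 - confidence) / target_rate_per_hour

if __name__ == "__main__":
    hours = failure_free_hours_needed(1e-9)   # the 10^-9 per hour claim
    years = hours / 8760                      # approx. hours per year
    print(f"{hours:.2e} failure-free hours needed (~{years:,.0f} years)")
    # Roughly 3e9 hours, i.e. several hundred thousand years: far beyond
    # what any test campaign can provide before a system enters service.
```

Run as written, the sketch reports about 3 × 10⁹ failure-free hours (several hundred thousand years), which is the sense in which the 10⁻⁹ claim lies orders of magnitude beyond what pre-deployment evidence can support.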