Quality of protection: measuring the unmeasurable?

Security is an abstraction. Even the technique used by Justice Stewart to define hard-core pornography does not appear to be applicable, because we are unable to discern, by casual inspection, those characteristics of a system that would lead us to believe that it is secure. Indeed, we may be unable to do so even after a prolonged and detailed evaluation.

Many years ago, I designed an experiment that attempted to evaluate the reliability of N-version fault-tolerant software. This paradigm holds that, under the assumptions that (1) the multiple versions of the software fail independently and (2) the mechanism used to resolve differences among them is perfect, the probability of failure of a system using N versions will be the product of the failure probabilities of the individual versions. While these assumptions are reasonable for hardware, where failures are (more or less) randomly distributed over time, they are problematic for software, in which failures are distributed in the data domain and tend to cluster in the “hard” parts of the problem space. Since even a small percentage of correlated failures has a major effect on the reliability gain, the paradigm is largely ineffective: if two versions each fail on one input in a hundred, independence predicts a joint failure probability of one in ten thousand, yet if even one input in a thousand defeats both versions, the true figure is at least ten times worse.

Unfortunately, reaching this conclusion was not easy. Early workers in the field thought that independent failures would be the rule provided that the programmers worked independently. I termed this the “prayer for diversity” approach because it seemed to me akin to praying that the different programmers did not make the same mistakes. Other approaches were equally ineffective. A major avionics development divided