In 1990, we submitted a paper to the Communications of the Association for Computing Machinery with the title "Validation of Ultra-High Dependability for Software-based Systems" [Littlewood, 1993]. The immediate trigger for the discussions that led to that paper was the requirement of a failure probability of less than 10^-9 per hour, or per cycle, for some safety-critical equipment in civil aircraft. We thought that the then-typical approach to this issue (codified in the DO-178B document) did not inspire confidence. We paraphrased (some people said caricatured) the position taken in DO-178B as: "a very low failure probability is required but, since its achievement cannot be proven in practice, some other, insufficient method of certification will be adopted". We also predicted that both this kind of extreme requirement, and the inadequate justification of its satisfaction, would spread to many more systems and industrial sectors, as they have.

Back then, different people had different takes on the issue, but our concerns were widely shared. Two years later, for example, Ricky Butler and George Finelli, of NASA, submitted to the IEEE Transactions on Software Engineering a paper with the title "The Infeasibility of Quantifying the Reliability of Life-Critical Real-Time Software" [Butler, 1993]. This anniversary of the SCSC falls about 20 years later, so it seems a good time to revisit our article briefly and see where the debate about these issues now stands.

Our paper's main points were:
- modern society depends on computers for a number of critical tasks in which failure can have very high costs
- thus, high levels of dependability (reliability, safety, etc.) are often required
- risk should be assessed quantitatively, so:
  - these requirements must be stated in quantitative terms, and
  - a rigorous demonstration of their attainment is necessary
- for software-based systems used in the most critical roles, such demonstrations are not usually supplied
[1] R. W. Butler and G. B. Finelli, "The Infeasibility of Quantifying the Reliability of Life-Critical Real-Time Software," IEEE Transactions on Software Engineering, 1993.
[2] L. Strigini et al., "Assessing the Risk due to Software Faults: Estimates of Failure Rate versus Evidence of Perfection," Software Testing, Verification and Reliability, 1998.
[3] D. Wright et al., "Confidence: Its Role in Dependability Cases for Risk Assessment," 37th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN'07), 2007.
[4] B. Littlewood et al., "Reasoning about the Reliability of Diverse Two-Channel Systems in Which One Channel Is 'Possibly Perfect'," IEEE Transactions on Software Engineering, 2012.
[5] B. Littlewood et al., "Validation of Ultra-High Dependability for Software-based Systems," Communications of the ACM, 1993.
[6] D. Wright et al., "The Use of Multilegged Arguments to Increase Confidence in Safety Claims for Software-Based Systems: A Study Based on a BBN Analysis of an Idealized Example," IEEE Transactions on Software Engineering, 2007.