The development of software for safety critical systems is guided by standards. Most standards identify processes for different safety integrity levels (SILs) or development assurance levels (DALs). Software is shown to be fit for use primarily by appeal to the standards, supported with appropriate evidence, e.g. from testing. The assumption is that software developed against the requirements of higher SILs will be less prone to critical failures. A paper at the last ISSC questioned this assumption, and proposed instead that an “evidence-based” approach be taken to software. Implementing this type of approach requires arguments that reflect the contribution of software to safety in the context of the system. We believe that an “evidence-based” approach can be implemented by using a framework for articulating software safety arguments, based on categorisation of evidence, which is largely independent of the development process. This paper outlines our approach, and shows how the ideas can be presented within a safety case, without precluding the use of existing standards. A key motivation in producing the paper is to expose these rather unconventional views to critical review, and to seek to build acceptance of the principles.

Introduction

Most standards used to guide the development of software for safety critical systems, e.g. DO-178B (ref. 1), DS 00-55 (ref. 2), and Part 3 of IEC 61508 (ref. 3), identify processes for different safety integrity levels (SILs) or development assurance levels (DALs). We have previously questioned the extent to which there is evidence that the approaches advocated by these standards are effective in practice (ref. 4-5). It is also widely agreed that these standards do not deal adequately with certain practical situations, e.g. the use of legacy or commercial off-the-shelf (COTS) software, nor with some modern practices, e.g. code generators. This is not to say that the standards give no guidance, merely that their emphasis is on the development of “new” software by conventional means. The aim of this paper is to show how to produce a framework for software safety evidence which can be applied independently of the process used for software development, but which does not preclude the use of existing standards. The ideas presented are intended to be generic, but they were motivated by work in the aerospace sector.

Current Standards

Current standards recommend a set of techniques to be used at each SIL or DAL. Both the developer and the assessor accept that, by following the process of applying these techniques and developing evidence, the software achieves the required level of safety. However, the safety evidence generated does not necessarily give a quantitative demonstration that the SIL or DAL has been achieved. Further, because software failures are difficult to detect in accidents, failure data are commercially sensitive, and the safety levels required of software are extremely high, it is difficult to determine the operational levels of safety actually achieved. Thus it is often not possible to assess, before or during operation, whether software produced to a SIL or DAL process attains the required level of safety. Previous research has shown the difficulty of quantitatively demonstrating a software failure rate, due to the systematic nature of software failures (ref. 6). There is also some evidence that software developed by a process-based approach may not always meet the required level of assurance (ref. 7).
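To give a sense of scale for the quantitative-demonstration problem noted above (ref. 6), the sketch below applies the standard statistical-testing argument: how much failure-free testing would be needed to support a claimed failure rate at a given confidence. The Poisson failure model and the illustrative numbers are assumptions for illustration only, and the model itself presumes random, independent failures, an assumption that systematic software failures generally violate.

```python
import math

def test_hours_required(target_failure_rate: float, confidence: float) -> float:
    """Failure-free test duration needed to support the claim
    'failure rate <= target_failure_rate' at the given confidence,
    assuming a simple Poisson (random, independent) failure model."""
    return -math.log(1.0 - confidence) / target_failure_rate

# Illustrative figures only: a 1e-9 per-hour target at 99% confidence
# requires roughly 4.6 billion hours of failure-free testing.
print(f"{test_hours_required(1e-9, 0.99):.3e} hours")
```

Even under these generous assumptions the required test duration is impractical, which is why operational evidence alone cannot show that a SIL or DAL has been met.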
Instead of following a prescriptive process-based approach, this paper introduces a product-based framework for software safety evidence. The intention of this framework is to circumvent the problems associated with the process-based methods described in the current standards.
References

[1] H. Lougee et al., "Software Considerations in Airborne Systems and Equipment Certification," 2001.
[2] T. Kelly et al., "Arguing Safety - A Systematic Approach to Managing Safety Cases," 1998.
[3] B. Littlewood et al., "Validation of ultrahigh dependability for software-based systems," CACM, 1993.
[4] J. A. McDermid et al., "Software Safety: Where's the Evidence?," SCS, 2001.
[5] D. J. Pumfrey et al., "The principled design of computer system safety analyses," 1999.
[6] J. McDermid et al., "Software Safety: Why is there no Consensus?," 2002.