The project Deconstructive Evaluation of Risk In Dependability Arguments and Safety Cases (DERIDASC) has recently experimented with techniques borrowed from literary theory for the analysis of safety cases. This paper introduces our high-level method for “deconstructing” safety arguments. Our approach is quite general and should be applicable to different types of safety argumentation framework. As one example, we outline how the approach would work in the context of the Goal Structuring Notation (GSN).

1 Deconstruction in a Safety Context

French philosopher Jacques Derrida’s concept of deconstruction rests upon the idea that, ironically enough, the meaning of an argument is a function of the observations that it excludes as irrelevant and the perspectives that it opposes either implicitly or explicitly. On the one hand, if we recognise an opposing argument explicitly, we might be tempted to misrepresent it as weaker than we really feel it to be; but if this misrepresentation is detected, or if our own arguments do not convince, we may succeed only in perpetuating the opposing view. On the other hand, if we try to suppress our acknowledgement of credible doubt, we leave the reader mystified as to why we feel the need to argue our conclusion at all. To ‘deconstruct’ an argument is to try to detect such failures of “closure”. Such failures need not necessarily lead one to an opposed conclusion (Armstrong & Paynter 2003, Armstrong 2003). A deconstruction of an argument tries to show how the argument undercuts itself with acknowledgements of plausible doubts about its conclusion, and how it betrays a nervous desire for the truth of its assumptions and conclusions rather than unshakeable confidence in them. This perspective recognises that deductive argument is unequal to the tasks of resolving contradictions and unifying the different explanatory narratives that underlie our debates.

The deconstruction of a deductive argument has two stages. The reversal stage develops a counter-argument from clues offered within the original argument; the displacement stage compares the two arguments. In the safety assessment context we view reversal as an opportunity to reassess the existing safety acceptance criteria. A safety argument is required to be inferentially valid in some sense, and its empirical premises must be justified in such a way that they seem plausible. Empirical claims can attain the status of knowledge only by means of supporting evidence of varying reliability. This is recognised in logics of justified belief that allow premises to be “warranted” to differing degrees; see, for example, Toulmin (1958). In the reversal stage of safety argument deconstruction we ignore the warrantedness of the premises: instead, we try to produce a counter-argument that seems warrantable. Hence we provisionally assume that we could find sufficient evidence for justified belief in our counter-argument. In the displacement stage we deal with the relative strength of the warrants and backing evidence for both argument and counter-argument. Ideally, after reversal we will be able to see that one argument (or both) is unsatisfactory and act accordingly (either accept the system or require more risk reduction). However, there is a possibility that we end up with two opposing arguments that are both “sufficiently” warranted. A deconstruction must explicitly recognise and analyse this particular failure of “closure”.
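The sense in which individually plausible premises transfer only limited warrant to a deductive conclusion can be made concrete under a standard probabilistic reading of “warrant”. The inequalities below are a generic illustration under that assumed reading, not the Warranted Deduction Schema introduced in Section 3: suppose each premise P_i of a deductively valid argument for a conclusion C is warranted to degree at least 1 - ε.

% Generic illustration only, assuming a probabilistic reading of warrant;
% this is not the paper's Warranted Deduction Schema.
\[
  \Pr(P_1 \wedge \dots \wedge P_n) \;\ge\; 1 - \sum_{i=1}^{n} \Pr(\neg P_i) \;\ge\; 1 - n\epsilon
\]
\[
  P_1, \dots, P_n \vdash C \quad\Longrightarrow\quad \Pr(C) \;\ge\; 1 - n\epsilon
\]

So even when every premise clears a fixed acceptance threshold, the warrant guaranteed for the conclusion weakens as the number of premises grows; this residual doubt is the kind of thing the displacement stage must weigh when it compares argument and counter-argument.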
To question the “closure” of an argument is to try to find a possibility that has been excluded but which, when re-introduced, undermines faith in the argument by suggesting a plausible counter-argument. The process of deconstruction is therefore, in the final analysis, adversarial.

Section 2 of this paper presents a brief example of safety argument deconstruction using the Goal Structuring Notation (GSN). As yet we have no pragmatic justification (e.g. cost-benefit) for the use of safety argument deconstruction in safety processes. In Section 3 we therefore confine ourselves to a philosophical justification in terms of the lack of deductive closure in any non-absolute argument: we show that when safety decision makers act upon “sufficiently justified” beliefs – as they do when they accept or reject safety-critical systems – they necessarily commit themselves to a variant of the ‘lottery paradox’. We explain this using a Warranted Deduction Schema we have developed for the comparison of arguments and counter-arguments. Section 4 examines political aspects of deconstruction in terms of the Warranted Deduction Schema. Section 5 outlines future issues in the pragmatic justification of safety argument deconstruction.

2 An Example: The Goal Structuring Notation

The example deconstruction in this section is presented in the context of the Goal Structuring Notation (GSN) and is adapted from Kelly (1998). The example argues the sufficiency of protection against a risk of catastrophic failure. In the source text the example is only part of a larger GSN argument, and thus some of the questions we put are answered there or are not relevant. We have taken the example out of its original context to illustrate the process of deconstruction. GSN is intended to make the structure of arguments clearer than free text can; it thus provides a neutral and convenient format for the (de)construction of safety counter-arguments. GSN specifies the following elements (a minimal sketch of how they fit together follows the list):

– Goals (best expressed as predicates)
– Goal Decomposition (top down)
– Strategies (for explaining goal decompositions)
– Solutions (direct information sources)
– Justifications (for explaining rationale)
– Assumptions
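To fix ideas, the following is a minimal sketch of how these GSN elements might be related if captured as a simple data structure. The class and field names, and the fragment of the protection argument used to populate them, are our own illustrative choices; they are not taken from Kelly (1998) or from any GSN tool.

from dataclasses import dataclass, field
from typing import List

# Minimal, hypothetical encoding of the GSN elements listed above.
@dataclass
class Solution:
    """A direct information source cited as evidence for a goal."""
    description: str

@dataclass
class Goal:
    """A claim about the system, best expressed as a predicate."""
    claim: str
    strategy: str = ""                                       # explains the decomposition below
    subgoals: List["Goal"] = field(default_factory=list)     # top-down goal decomposition
    solutions: List[Solution] = field(default_factory=list)
    justifications: List[str] = field(default_factory=list)
    assumptions: List[str] = field(default_factory=list)

# Illustrative fragment only: a top goal decomposed over a single hazard.
top_goal = Goal(
    claim="Protection against catastrophic failure is sufficient",
    strategy="Argue over each identified catastrophic hazard",
    subgoals=[
        Goal(
            claim="Hazard H1 is acceptably mitigated",
            solutions=[Solution("Fault tree analysis results")],
            assumptions=["All catastrophic hazards have been identified"],
        )
    ],
)

Each node in such a structure is a point at which reversal can re-introduce an excluded possibility; the assumption attached to the subgoal above, for instance, is exactly the kind of claim a counter-argument would target.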