Unresolved conflicts can introduce inconsistent and uncertain choices into a system description. Such inconsistencies imply nondeterminacy, and nondeterminism is widely held to be a bad thing; e.g., Leveson says that “nondeterminism is the enemy of reliability” [2]. Requirements engineers, on the other hand, argue that inconsistencies are a good thing [9]. To be human is to hold an opinion. To be an expert is to hold an opinion that others wish to pay for. Hence, a room of experts must argue, lest any one of them lose their income. Inconsistencies are not embarrassments that should be left undocumented. Rather, their detection, exploration, and partial resolution are a powerful propellant for option discovery and documentation.

Contrary to Leveson’s view, inconsistencies can make a system safer, since unsafe systems typically result from unexpected consequences. Exploring inconsistencies can drive a design into a zone that was not previously considered, where unsafe possibilities can be recognized and repaired.

But the arguments of requirements engineers may not convince the broader software engineering community. If we cannot demonstrate that some stable set of consequences can be inferred from a space of inconsistent assertions, then our requirements will appear unpredictable and untrustworthy. Theoretically, such a demonstration is intractable. Gabow et al. [1] showed that building pathways across programs with inconsistent pairs is NP-hard for all but the simplest software models (a software model is very simple if it is very small, or it is a simple tree, or its dependency network has a very small out-degree). No fast and complete algorithm for NP-hard tasks has been discovered, despite decades of research. Hence, computing all the consequences of a space of inconsistencies can be impossibly slow, except for very small models.

Empirical results offer new hope for the practicality of exploring a space of inconsistent choices.
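The combinatorics behind this intractability can be seen in a few lines: if a model contains k independent two-way conflicts, then a full enumeration of the consistent worlds of belief grows as 2^k. The sketch below is illustrative only (the conflict names and representation are assumptions, not taken from Gabow et al.):

```python
from itertools import product

def full_worlds(conflicts):
    """Enumerate every world of belief: one choice per unresolved conflict.
    `conflicts` maps a conflict id to its list of possible resolutions."""
    keys = list(conflicts)
    return [dict(zip(keys, picks)) for picks in product(*conflicts.values())]

# Three two-way conflicts already yield 2**3 = 8 distinct worlds;
# k conflicts yield 2**k, which is why exhaustive exploration stalls.
conflicts = {"c1": ["a", "not-a"], "c2": ["b", "not-b"], "c3": ["c", "not-c"]}
worlds = full_worlds(conflicts)
print(len(worlds))  # → 8
```

Each extra conflict doubles the number of worlds, so any method that must visit them all is exponential in the number of unresolved conflicts.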
Menzies, Easterbrook, Nuseibeh and Waugh [8] found that most of the choices made within a space of conflicts had the same net effect. That study compared two search strategies. In full worlds search, one world of belief was forked for each possible resolution of every inconsistency. In random worlds search, when several resolutions were possible, one was picked at random. In a very large case study (over a million runs), the average difference in reachable goals between random worlds search and full worlds search was vanishingly small.

These results can be explained via the funnel theory first proposed by Menzies, Easterbrook, Nuseibeh and Waugh [8], then elaborated by Menzies, Singh, Powell, and Kiper [4–7]. To introduce funnels, we first say that an argument space supports reasons, i.e. chains of reasoning that link inputs in a certain context to desired goals. Chains have links of at least two types: firstly, links that clash with other links; secondly, links that depend on other links.
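The two strategies compared above can be sketched on a toy argument space. All names and the goal sets below are illustrative assumptions, not data from [8]: each conflict offers alternative resolutions, and each resolution enables some reachable goals.

```python
import random
from itertools import product

# Toy argument space: conflict -> {resolution -> goals that it enables}.
conflicts = {
    "sensor":  {"poll": {"g1", "g2"}, "interrupt": {"g1", "g3"}},
    "storage": {"sql":  {"g4"},       "files":     {"g4", "g5"}},
}

def goals_of(world):
    """Union of the goals enabled by a world's chosen resolutions."""
    reached = set()
    for conflict, choice in world.items():
        reached |= conflicts[conflict][choice]
    return reached

def full_worlds_goals():
    """Full worlds search: fork one world per combination of resolutions."""
    keys = list(conflicts)
    reached = set()
    for picks in product(*(conflicts[k] for k in keys)):
        reached |= goals_of(dict(zip(keys, picks)))
    return reached

def random_world_goals(rng):
    """Random worlds search: commit to one random resolution per conflict."""
    world = {k: rng.choice(sorted(v)) for k, v in conflicts.items()}
    return goals_of(world)

full = full_worlds_goals()
sampled = set()
rng = random.Random(1)
for _ in range(20):          # a handful of random worlds, not all 4
    sampled |= random_world_goals(rng)
print(sorted(full), sorted(sampled))
```

Even in this toy space, a few random worlds tend to reach most of the goals that the exhaustive search reaches; the funnel theory explains why, at much larger scale, the gap observed in [8] was so small.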
[1] Tim Menzies et al., “How AI Can Help SE; or: Randomized Search Not Considered Harmful”, Canadian Conference on AI, 2001.
[2] B. Cukic et al., “Testing nondeterminate systems”, Proceedings 11th International Symposium on Software Reliability Engineering (ISSRE 2000), 2000.
[3] Bashar Nuseibeh et al., “An empirical investigation of multiple viewpoint reasoning in requirements engineering”, Proceedings IEEE International Symposium on Requirements Engineering, 1999.
[4] Leon J. Osterweil et al., “On Two Problems in the Generation of Program Test Paths”, IEEE Transactions on Software Engineering, 1976.
[5] Nancy G. Leveson et al., “Safeware: System Safety and Computers”, 1995.
[6] Bashar Nuseibeh et al., “Leveraging Inconsistency in Software Development”, Computer, 2000.
[7] Tim Menzies et al., “How to Argue Less”, 2001.
[8] Bojan Cukic et al., “Adequacy of Limited Testing for Knowledge Based Systems”, Int. J. Artif. Intell. Tools, 2000.