Why Tests Don't Pass

Most testers think of tests passing or failing: either they found a bug or they didn’t. Unfortunately, experience shows us repeatedly that passing a test doesn’t really mean there is no bug. It is quite possible for a test to surface an error that goes undetected at the time. It is also possible for bugs to remain in a feature even after that capability has been tested. Passing really only means that we didn’t notice anything interesting. Likewise, failing a test is no guarantee that a bug is present. There could be a bug in the test itself, a configuration problem, corrupted data, or a host of other explainable causes that do not indicate anything wrong with the software under test. Failing really only means that something we noticed warrants further investigation.

This paper explains these ideas further, explores some of their implications, and suggests ways to benefit from this new way of thinking about test outcomes. It concludes with an examination of how to use this viewpoint to better prepare tests and report results.
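
To make the distinction concrete, here is a minimal, hypothetical sketch in Python (the function, names, and numbers are invented for illustration and do not come from the paper). The first test passes even though the product contains a bug, because its oracle never exercises the buggy path; the second test fails even though the product behaves correctly, because the expectation in the test itself is wrong.

    # Hypothetical sketch (names and numbers invented for illustration).

    def shipping_cost(weight_kg: float) -> float:
        """Flat $5 rate up to 10 kg, then $0.50 per extra kg.
        Bug: the surcharge tier tests > 100 instead of > 10."""
        if weight_kg > 100:                      # should be: weight_kg > 10
            return 5.0 + (weight_kg - 10) * 0.5
        return 5.0

    def test_false_pass():
        # The oracle is too weak: both inputs fall in the flat-rate tier,
        # so the tiering bug is never exercised and the test "passes".
        assert shipping_cost(2.0) == 5.0
        assert shipping_cost(9.9) == 5.0

    def test_false_alarm():
        # The product behaves correctly for this input; the failure comes
        # from a bug in the test itself: the expected value is wrong.
        assert shipping_cost(5.0) == 4.0         # tester misremembered the rate

    if __name__ == "__main__":
        test_false_pass()                        # passes despite the product bug
        print("false pass: nothing interesting noticed")
        try:
            test_false_alarm()
        except AssertionError:
            print("false alarm: failure observed, but the bug is in the test")

In both cases the verdict alone settles nothing: only investigation reveals whether the fault lies in the product, in the test, or in neither.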