Jester - a JUnit test tester
Extreme programmers have confidence in their code if it passes their unit tests. More experienced extreme programmers only have confidence in their code if they also have confidence in their tests. One technique extreme programmers use to gain confidence in their tests is to check that the tests spot deliberate errors introduced into the code. This sort of manual test testing is either time-consuming or very superficial. Jester is a test tester for JUnit tests: it modifies the source in a variety of ways and checks whether the tests fail for each modification. Jester reports code changes that can be made without causing the tests to fail. If code can be modified without the tests failing, either a test is missing or the code is redundant. Jester can be used to gain confidence that the existing tests are adequate, or to give clues about tests that are missing. Jester differs from code coverage tools in that it can find code that is executed when the tests run but not actually tested. Jester will be compared with conventional code coverage tools, and results of using Jester will be discussed.

1 INTRODUCTION

Extreme programmers [1] have confidence in code if it passes tests, and have confidence in tests if they catch errors. Many extreme programmers temporarily put deliberate errors in their code to check that their tests catch those errors, before correcting the code so that it passes the tests. In some project teams a project saboteur [4] is appointed, whose role is to verify that errors they deliberately introduce into a copy of the code base are caught by the tests. Jester performs similar test testing mechanically: it makes a change to a source file, recompiles that file, runs the tests, and if the tests still pass it displays a message saying what it changed. Jester makes its changes one at a time, to every source file in a directory tree, making many different changes to each source file. The different types of change made are
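The change-recompile-run loop described above can be sketched as follows. This is a simplified, hypothetical illustration in plain Java, not Jester's actual source-editing and recompilation machinery: rather than rewriting source files, it applies deliberate mutations to a small operation under test and reports any mutation that the test fails to catch. All names here are invented for the example.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntBinaryOperator;

// Hypothetical sketch of Jester-style test testing: apply one mutation at a
// time to the code under test, run the test, and report mutations that the
// test does NOT catch.
public class MutationSketch {

    // A deliberately weak "unit test": passes iff op(2, 3) == 5.
    static boolean testPasses(IntBinaryOperator op) {
        return op.applyAsInt(2, 3) == 5;
    }

    // Runs every mutant against the test and returns descriptions of the
    // mutants that survive (i.e. the test still passes after the change).
    static List<String> survivingMutants() {
        record Mutant(String description, IntBinaryOperator op) {}
        List<Mutant> mutants = List.of(
            new Mutant("a + b -> a - b", (a, b) -> a - b),
            new Mutant("a + b -> a * b", (a, b) -> a * b),
            new Mutant("a + b -> b + a", (a, b) -> b + a) // behaviour unchanged
        );
        List<String> survivors = new ArrayList<>();
        for (Mutant m : mutants) {
            if (testPasses(m.op())) {
                // The test did not fail: either a test is missing or the
                // mutated code is equivalent/redundant -- this is exactly the
                // kind of change Jester reports.
                survivors.add(m.description());
            }
        }
        return survivors;
    }

    public static void main(String[] args) {
        System.out.println("Surviving mutants: " + survivingMutants());
    }
}
```

The first two mutants are "killed" (the test fails for them), while the third survives because it does not change behaviour at all, illustrating why a surviving mutant can mean redundancy rather than a missing test. The real Jester works at the source level, making textual changes such as flipping literals and conditions, recompiling, and rerunning the whole JUnit suite for each change.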
[1] Kent Beck, et al. Test-infected: programmers love writing tests, 2000.
[2] Brian L. Meek, et al. The effectiveness of error seeding, SIGPLAN Notices, 1989.