Editorial: Software testing is an elephant

This issue contains two papers. The first, An analysis technique to increase testability of object-oriented components, by Kansomkeat and Rivepiboon, examines the problem of testability of object-oriented software components when the source is not available but something like bytecode is. OO components have low testability because information hiding obscures the state that needs to be controlled and monitored during testing. This research uses a clever idea of extracting a control and data flow graph, which is then used to increase both controllability and observability, making it easier to detect faults. The second paper, The determination of optimal software release times at different confidence levels with consideration of learning effects, by Ho, Fang and Huang, uses stochastic differential equations to build a software reliability model. This model is validated on data that were published in six previous papers. The results will help project managers decide when to release software to maximize its reliability.

We have all probably heard the parable of the blind men and the elephant. Each blind man touches a different part of the elephant and describes what he feels. The men get into an argument, which leads to physical violence, and they resolve the conflict with outside help. In some versions the conflict is never resolved.

Last spring I attended a workshop on software system testing and was lucky enough to find a wise person who helped dispel some of my blindness by helping me understand several distinct types of test activities.

Most of my research has focused on using test criteria to help design software tests. Test design is amenable to objective, quantitative assessments of the tests, as well as automatic generation of test values, my first research love. Criteria-based test design requires knowledge of mathematics and of programming; it is an engineering approach.

An equally important way to generate tests is from human intuition. Objective test criteria can overlook special situations that smart people with knowledge of the domain, testing and user interfaces will easily see. Both approaches are intellectually stimulating, rewarding and challenging, but they appeal to people with different backgrounds. The two approaches are also complementary and, in most projects, contribute equally to creating high-quality software.

For efficient and effective testing, especially as software evolves through hundreds or thousands of versions, we also need to automate our tests. Test automation involves programming that is usually relatively straightforward, using scripting languages, frameworks such as JUnit, or capture/replay tools (a small JUnit example appears below). Test automation requires little knowledge of theory, algorithms or the domain, but test scripts must often solve tricky problems with controllability.

Many companies focus heavily on test execution. If we combine test design with test execution and do not use automation, test execution is very hard. This approach is also inefficient and usually
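
To make the automation point concrete, here is a minimal sketch of the kind of automated test a JUnit framework supports. The Account class and its methods are invented purely for illustration; the pattern of setting up a known state and checking the result automatically is the point.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical class under test, included only so the example is self-contained.
class Account {
    private int balance = 0;
    void deposit(int amount) { balance += amount; }
    int getBalance() { return balance; }
}

public class AccountTest {
    @Test
    public void depositIncreasesBalance() {
        // Controllability: put the object into a known state...
        Account account = new Account();
        account.deposit(100);
        // ...and observability: check the resulting state automatically.
        assertEquals(100, account.getBalance());
    }
}

Once such tests exist, a build tool can rerun them on every new version, which is what makes repeated test execution cheap as the software evolves.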