ConTest listeners: a concurrency-oriented infrastructure for Java test and heal tools

With the proliferation of multi-core personal computers and the explosive growth in the use of highly concurrent machine configurations, concurrent code is no longer written by a select few but by the masses. As anyone who has written such code knows, many traps await. This increases the need for good concurrency-aware tools throughout the program quality cycle: monitoring, testing, debugging, and the emerging field of self-healing. Academics who build such tools face two main difficulties: writing the instrumentation infrastructure for the tool, and integrating it into real user environments to obtain meaningful results. Because these difficulties are hard to overcome, most academic tools never make it past the toy stage. The ConTest Listener architecture provides instrumentation and runtime engines into which writers of test and heal tools, especially concurrency-oriented ones, can easily plug their code. This paper describes the architecture, mainly from the point of view of a user who intends to create a testing or healing tool. The architecture enables tool creators to focus on the concurrency problem they are trying to solve rather than on writing the entire infrastructure. In addition, once a tool is created within the ConTest Listeners framework, it can be used by the framework's users with no additional work, giving the tool access to real industrial applications. We show how to create tools using the architecture and describe some work that has already taken place.
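
To make the plug-in model concrete, the sketch below shows the general shape of a listener-based tool: the runtime engine raises concurrency events such as lock acquisition or shared-variable access, and the tool implements callbacks instead of writing its own bytecode instrumentation. This is a minimal illustration only; all names in it (ConcurrencyListener, beforeMonitorEnter, NoiseMakerTool, and so on) are invented for the sketch and are not the actual ConTest Listeners API.

```java
import java.util.Random;

/** Hypothetical set of events a runtime engine might expose to plugged-in tools. */
interface ConcurrencyListener {
    void beforeMonitorEnter(Object monitor, Thread t);
    void afterSharedVariableAccess(String variableName, Thread t);
}

/** A toy "noise maker" tool: perturbs thread scheduling to expose races. */
class NoiseMakerTool implements ConcurrencyListener {
    private final Random random = new Random();

    @Override
    public void beforeMonitorEnter(Object monitor, Thread t) {
        // Occasionally yield before a lock is taken, changing interleavings.
        if (random.nextInt(10) == 0) {
            Thread.yield();
        }
    }

    @Override
    public void afterSharedVariableAccess(String variableName, Thread t) {
        // A testing or healing tool could record coverage or check invariants here.
        System.out.println(t.getName() + " accessed " + variableName);
    }
}

/** Stand-in for the runtime engine that would normally drive the callbacks. */
public class ListenerSketch {
    public static void main(String[] args) {
        ConcurrencyListener tool = new NoiseMakerTool();
        Object lock = new Object();

        // A real engine would invoke these callbacks from instrumented bytecode;
        // here they are called directly to show the control flow.
        tool.beforeMonitorEnter(lock, Thread.currentThread());
        synchronized (lock) {
            tool.afterSharedVariableAccess("counter", Thread.currentThread());
        }
    }
}
```

The point of the pattern is that a noise-injection, coverage, or healing tool reduces to a small set of callbacks, while instrumentation and event dispatch remain the framework's responsibility.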
