Using observation and refinement to improve distributed systems test

Testing a distributed system is difficult. Good testing depends on both skill and an understanding of the system under test. We have developed a method that observes the system at the CORBA remote-procedure-call level and then uses dynamic-query-based visualization to refine and improve test cases. The method and its accompanying tools have been exercised and refined as part of the software support effort for two distributed applications, each with about 500,000 lines of code. During this time, the tools were adapted to support testing: we added a scripting mechanism that lets the visualization tool specify test reports, added observation and reporting of parameter values, and finally added an active probing mechanism that induces faults and delays to stress the system under test. These efforts have led to a substantial improvement in system test quality.
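To make the observation and probing steps concrete, the following is a minimal sketch of a server-side CORBA portable interceptor, written in Java against the standard org.omg.PortableInterceptor API. It logs each incoming remote call and, when probing is enabled, injects a configurable delay before the call is dispatched. The class name, configuration fields, and log format are illustrative assumptions and not the tool described in the paper.

import org.omg.CORBA.LocalObject;
import org.omg.PortableInterceptor.ForwardRequest;
import org.omg.PortableInterceptor.ServerRequestInfo;
import org.omg.PortableInterceptor.ServerRequestInterceptor;

// Hypothetical interceptor: observes incoming CORBA calls and can delay
// them to stress timing assumptions in the system under test.
public class TraceAndProbeInterceptor extends LocalObject
        implements ServerRequestInterceptor {

    private final boolean probingEnabled;  // assumed configuration flag
    private final long delayMillis;        // assumed injected delay

    public TraceAndProbeInterceptor(boolean probingEnabled, long delayMillis) {
        this.probingEnabled = probingEnabled;
        this.delayMillis = delayMillis;
    }

    public String name() { return "TraceAndProbeInterceptor"; }
    public void destroy() { }

    public void receive_request_service_contexts(ServerRequestInfo ri)
            throws ForwardRequest { }

    public void receive_request(ServerRequestInfo ri) throws ForwardRequest {
        // Record the observed call; a full tool would also capture
        // parameter values and timestamps for later visualization.
        System.out.println("call: " + ri.operation());
        if (probingEnabled) {
            try {
                Thread.sleep(delayMillis);  // induce a delay to stress the caller
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    public void send_reply(ServerRequestInfo ri) {
        System.out.println("reply: " + ri.operation());
    }

    public void send_exception(ServerRequestInfo ri) throws ForwardRequest {
        System.out.println("exception: " + ri.operation());
    }

    public void send_other(ServerRequestInfo ri) throws ForwardRequest { }
}

In a real deployment the interceptor would be registered through an ORBInitializer (its post_init method calling add_server_request_interceptor) named via the org.omg.PortableInterceptor.ORBInitializerClass ORB property, so observation requires no changes to application code.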
