Efficient and Trustworthy Tool Qualification for Model-Based Testing Tools

The application of test automation tools in a safety-critical context requires so-called tool qualification according to the applicable standards. The objective of this qualification is to justify that verification steps automated by the tool do not lead to faulty systems under test being accepted as fit for purpose. In this paper we review the tool qualification requirements of the standards ISO 26262 (automotive domain) and the new RTCA DO-178C (avionics domain) and propose a general approach to qualifying model-based testing tools according to these standards in a way that is efficient and at the same time trustworthy. Our approach relies on a lightweight error detection mechanism based on the idea of replaying test executions against the model. We further show how these error detection capabilities can be integrated into a convincing tool qualification argument, going through the necessary verification activities step by step. We highlight the key steps for the RT-Tester Model-Based Test Generator, which is used in test campaigns in the automotive, railway and avionics domains. The approach avoids having to qualify several complex components of model-based testing tools, such as code generators for test procedures and constraint-solving algorithms for test data elaboration.
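To illustrate the replay idea mentioned above, the following is a minimal sketch of such a lightweight checker: a recorded test execution (a sequence of stimuli and observed outputs) is re-checked against a deterministic model, so that only this small checker, rather than the full test generator, needs to be trusted. All names here (`Model`, `Step`, `replay`) are illustrative assumptions, not part of RT-Tester's actual API.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Step:
    stimulus: str   # input applied to the system under test
    observed: str   # output observed during test execution


class Model:
    """Deterministic Moore-style model: (state, stimulus) -> next state,
    state -> expected output. Purely illustrative."""

    def __init__(self, transitions, outputs, initial):
        self.transitions = transitions  # dict: (state, stimulus) -> state
        self.outputs = outputs          # dict: state -> expected output
        self.state = initial

    def step(self, stimulus):
        # Advance the model and return the output it expects.
        self.state = self.transitions[(self.state, stimulus)]
        return self.outputs[self.state]


def replay(model, trace):
    """Replay a recorded execution against the model.

    Returns the index of the first step whose observed output deviates
    from the model's expectation, or None if the trace conforms."""
    for i, step in enumerate(trace):
        expected = model.step(step.stimulus)
        if expected != step.observed:
            return i
    return None


# Usage with a toy two-state on/off model:
transitions = {("s0", "a"): "s1", ("s1", "b"): "s0"}
outputs = {"s0": "off", "s1": "on"}

conforming = replay(Model(transitions, outputs, "s0"),
                    [Step("a", "on"), Step("b", "off")])   # None
deviating = replay(Model(transitions, outputs, "s0"),
                   [Step("a", "on"), Step("b", "on")])     # 1
```

Because the checker only compares a concrete trace against the model, its correctness argument is far simpler than that of the test generator it monitors, which is the crux of the qualification approach described in the abstract.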