State space exploration using feedback constraint generation and Monte-Carlo sampling

The systematic exploration of the space of all behaviours of a software system forms the basis of numerous approaches to verification. However, existing approaches face significant challenges with scalability and precision. We propose a framework for validating programs based on statistical sampling of inputs, guided by statically generated constraints that steer the simulations towards more "desirable" traces. Our approach works iteratively: each iteration simulates the system on inputs sampled from a restricted space, while recording facts about the simulated traces. Subsequent iterations then attempt to steer future simulations away from what has already been observed. This is achieved by two separate means: (a) we perform symbolic executions to guide the choice of inputs, and (b) we sample from the input space according to a probability distribution derived from previously observed test data, using a Markov Chain Monte-Carlo (MCMC) technique. As a result, the sampled inputs generate traces that are likely to differ significantly, in user-specified ways, from the observations of previous iterations. We demonstrate that our approach is effective: it can rapidly isolate rare behaviours of systems and thereby reveal more bugs.
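The MCMC component described above can be illustrated with a minimal sketch. The following is not the paper's implementation; it assumes a hypothetical `trace_signature` function standing in for simulating the system and abstracting the resulting trace, and it uses a Metropolis-Hastings random walk whose stationary distribution favours inputs whose trace signatures were rarely seen in earlier iterations:

```python
import random

def trace_signature(x):
    # Hypothetical stand-in for simulating the system on input x and
    # abstracting the observed trace (here: which branches were taken).
    return (x % 3 == 0, x % 7 == 0, x > 500)

def novelty(sig, seen_counts):
    # Inputs whose trace signatures were observed often in earlier
    # iterations receive proportionally lower weight.
    return 1.0 / (1.0 + seen_counts.get(sig, 0))

def mcmc_sample(steps, seen_counts, lo=0, hi=1000, seed=0):
    """Metropolis-Hastings walk over an integer input space whose
    target density is proportional to trace novelty."""
    rng = random.Random(seed)
    x = rng.randint(lo, hi)
    w_x = novelty(trace_signature(x), seen_counts)
    samples = []
    for _ in range(steps):
        # Symmetric random-walk proposal, clamped to the input range.
        y = min(hi, max(lo, x + rng.randint(-50, 50)))
        w_y = novelty(trace_signature(y), seen_counts)
        # Accept the move with probability min(1, w_y / w_x).
        if rng.random() < min(1.0, w_y / w_x):
            x, w_x = y, w_y
        samples.append(x)
    return samples

# Iterative feedback loop: each iteration's observations bias the next.
seen = {}
for _ in range(3):
    batch = mcmc_sample(200, seen, seed=len(seen))
    for x in batch:
        sig = trace_signature(x)
        seen[sig] = seen.get(sig, 0) + 1
```

Updating `seen` only between iterations (rather than mid-walk) keeps each chain's target distribution fixed, mirroring the paper's iterative structure in which one round's observations shape the next round's sampling.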
