PreInfer: Automatic Inference of Preconditions via Symbolic Analysis

When tests fail (e.g., by throwing uncaught exceptions), automatically inferred preconditions can help developers debug in several ways. If illegal inputs cause tests to fail, developers can insert the preconditions directly into the method under test to improve its robustness. If legal inputs cause tests to fail, developers can use the preconditions to derive failure-inducing conditions. To automatically infer preconditions in support of debugging, in this paper we propose PREINFER, a novel approach that infers accurate and concise preconditions based on symbolic analysis. Specifically, PREINFER includes two novel techniques: one prunes irrelevant predicates from path conditions collected from failing tests, and the other generalizes predicates involving collection elements (e.g., array elements) to infer desirable quantified preconditions. Our evaluation on two benchmark suites and two real-world open-source projects shows PREINFER's high effectiveness in precondition inference and its superiority over related approaches.
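To make the idea concrete, the following is a minimal sketch (not PREINFER's actual algorithm; all names are hypothetical) of the kind of concise, quantified precondition such a tool aims to infer for a method that fails on illegal inputs:

```python
def average_of_positives(values):
    """Fails on illegal inputs: raises ZeroDivisionError when `values`
    is empty and ValueError when any element is negative."""
    if any(v < 0 for v in values):
        raise ValueError("negative element")
    return sum(values) / len(values)

# A desirable inferred precondition: it prunes predicates irrelevant to
# the failures and quantifies over the collection's elements, rather
# than enumerating per-index constraints (values[0] >= 0, values[1] >= 0, ...).
def inferred_precondition(values):
    return len(values) > 0 and all(v >= 0 for v in values)
```

Inputs violating the precondition are exactly those that trigger the exceptions, so a developer could guard the method with it or use it to characterize failure-inducing inputs.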
