AFID: an automated fault identification tool

We present the Automatic Fault IDentification Tool (AFID). AFID automatically constructs repositories of real software faults by monitoring the software development process. For each crashing fault the developer discovers, AFID records a fault-revealing test case and the faulty version of the source code; for each crashing fault the developer corrects, it records the fault-correcting source code change. The test cases are a significant contribution because they enable new research into the dynamic behavior of software faults. AFID uses a ptrace-based mechanism to monitor both the compilation and the execution of the application, which makes it straightforward for AFID to support a wide range of programming languages and compilers. Our benchmark results indicate that the monitoring overhead will be acceptable for most developers. We performed a short case study to evaluate how effectively AFID records software faults; in it, AFID recorded 12 software faults from 8 participants.
