Exterminating bugs via collective information recycling

End-user software is executed billions of times daily, but the corresponding execution details (“by-products”) are discarded. We hypothesize that, if suitably captured and aggregated, these by-products could substantially speed up the process of testing programs and proving them correct. Ironically, both testing and debugging involve simulating real-world conditions and executions, in essence trying to recreate in the lab some of these (previously available, but discarded) execution details. This position paper proposes a way to recoup the execution information that is lost during everyday software use, aggregate it, and automatically turn it into bug fixes and proofs. The goal is to enable software to improve itself by “learning” from past failures and successes, leveraging the information-rich execution by-products that today are being wasted. We view every execution of a program as a test run and aggregate executions across the lifetime of a program into one gigantic test suite — i.e., we remove the distinction between software use and software testing and verification — with the purpose of substantially reducing software bug density.
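To make the vision concrete, here is a minimal illustrative sketch (our own, not the authors' system) of treating every ordinary run of a program as a recorded test: each execution's by-products (input, exit code, output digest) are appended to a corpus that can later be replayed as a regression suite. All names here (record_run, replay_corpus, corpus.jsonl) are hypothetical and only stand in for the kind of capture-and-aggregate loop the paper envisions.

```python
# Toy "execution recycler": record the by-products of every run, replay them later as tests.
# Purely illustrative; real systems would capture far richer traces (coverage, interleavings, etc.).
import hashlib
import json
import subprocess
import sys
from pathlib import Path

CORPUS = Path("corpus.jsonl")  # hypothetical on-disk store of recorded executions


def record_run(cmd, stdin_data=b""):
    """Run `cmd`, capture this execution's by-products, and append them to the corpus."""
    proc = subprocess.run(cmd, input=stdin_data, capture_output=True)
    entry = {
        "cmd": cmd,
        "stdin": stdin_data.decode("utf-8", "replace"),
        "exit_code": proc.returncode,
        "stdout_hash": hashlib.sha256(proc.stdout).hexdigest(),
    }
    with CORPUS.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return proc.returncode


def replay_corpus():
    """Re-execute every recorded run as a regression test; report behavioral divergence."""
    failures = 0
    for line in CORPUS.read_text().splitlines():
        entry = json.loads(line)
        proc = subprocess.run(entry["cmd"], input=entry["stdin"].encode(), capture_output=True)
        same_exit = proc.returncode == entry["exit_code"]
        same_output = hashlib.sha256(proc.stdout).hexdigest() == entry["stdout_hash"]
        if not (same_exit and same_output):
            failures += 1
            print(f"divergence while replaying {entry['cmd']}", file=sys.stderr)
    return failures


if __name__ == "__main__":
    # Every everyday use of the program doubles as a test run added to the suite.
    record_run(["echo", "hello"])
    sys.exit(1 if replay_corpus() else 0)
```

In this sketch the "aggregation" is just a local append-only file; the paper's proposal is the distributed analogue, pooling such recorded executions across all users of a program into one very large, continuously growing test suite.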
