One in a baker's dozen: debugging debugging

Voas (2007) outlined 13 major software engineering issues needing further research: (1) what is software quality? (2) what are the economic benefits of existing software engineering techniques? (3) does process improvement matter? (4) can you trust software metrics and measurement? (5) why are software engineering standards confusing and hard to comply with? (6) are standards interoperable? (7) how do we decommission software? (8) where are reasonable testing and debugging stoppage criteria? (9) why are COTS components so difficult to compose? (10) why are reliability measurement and operational profile elicitation viewed suspiciously? (11) can we design in the "ilities" both technically and economically? (12) how do we handle the liability issues surrounding certification? (13) is intelligent and autonomic computing feasible? This paper focuses on a simple, easy-to-understand metric that addresses the eighth issue: a testing and debugging stoppage criterion based on expected probability-of-failure graphs.
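To make the eighth issue concrete, a stoppage criterion based on an estimated probability of failure can be sketched as follows. This is an illustrative assumption only: the rule below is a standard zero-failure binomial confidence bound (the "rule of three" family), not the specific probability-of-failure-graph metric the paper develops, and the function names are hypothetical.

```python
def failure_prob_upper_bound(n_passed: int, confidence: float = 0.95) -> float:
    """Largest per-test failure probability p consistent with n_passed
    consecutive failure-free tests at the given confidence level.

    Solves (1 - p) ** n_passed = 1 - confidence for p.
    """
    if n_passed <= 0:
        raise ValueError("need at least one observed test")
    return 1.0 - (1.0 - confidence) ** (1.0 / n_passed)


def should_stop_testing(n_passed: int, target_failure_prob: float,
                        confidence: float = 0.95) -> bool:
    """Stop once the confidence bound on the per-test failure
    probability drops to or below the acceptable target."""
    return failure_prob_upper_bound(n_passed, confidence) <= target_failure_prob


# Example: after 3000 failure-free tests the 95% upper bound is about 1e-3,
# so testing can stop if a 1-in-1000 failure probability is acceptable.
print(should_stop_testing(3000, 1e-3))   # True
print(should_stop_testing(1000, 1e-3))   # False
```

Plotting the bound against the number of failure-free tests gives one simple instance of an expected probability-of-failure curve that decays toward an acceptable threshold.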

[1] Edward N. Adams et al., "Optimizing Preventive Service of Software Products," IBM J. Res. Dev., 1984.

[2] Kinji Mori et al., "Autonomous decentralized systems: concept, data field architecture and future trends," Proceedings ISADS 93: International Symposium on Autonomous Decentralized Systems, 1993.

[3] John D. Musa et al., "Operational profiles in software-reliability engineering," IEEE Software, 1993.

[4] Kinji Mori et al., "High-speed processing in wired-and-wireless integrated autonomous decentralized system and its application to IC card ticket system," Third IEEE International Workshop on Engineering of Autonomic & Autonomous Systems (EASE'06), 2006.

[5] Kinji Mori et al., "Research of Reliability Technology in Heterogeneous Autonomous Decentralized Assurance Systems," Eighth International Symposium on Autonomous Decentralized Systems (ISADS'07), 2007.

[6] Watts S. Humphrey et al., "A Discipline for Software Engineering," Series in Software Engineering, 2012.

[7] Jeffrey M. Voas et al., "Software test cases: is one ever enough?," IT Professional, 2006.

[8] Mitsuru Ohba et al., "Does imperfect debugging affect software reliability growth?," ICSE '89, 1989.

[9] Kent L. Beck et al., "Test-Driven Development: By Example," The Addison-Wesley Signature Series, 2002.

[10] Dale Skeen et al., "Nonblocking commit protocols," SIGMOD '81, 1981.

[11] Boris Beizer, "Software Testing Techniques," 1983.

[12] Hong Zhu et al., "Generating Structurally Complex Test Cases by Data Mutation: A Case Study of Testing an Automated Modelling Tool," Comput. J., 2009.

[13] Jeffrey M. Voas, "A Baker's Dozen: 13 Software Engineering Challenges," IT Professional, 2007.

[14] Jeffrey M. Voas et al., "Faults on its sleeve: amplifying software reliability testing," ISSTA '93, 1993.

[15] Brian Marick, "The Craft of Software Testing," 1994.

[16] Mary Shaw et al., "Empirical evaluation of defect projection models for widely-deployed production software systems," SIGSOFT '04/FSE-12, 2004.

[17] Tadashi Dohi et al., "On the Effect of Fault Removal in Software Testing - Bayesian Reliability Estimation Approach," 17th International Symposium on Software Reliability Engineering (ISSRE 2006), 2006.

[18] Michael Stonebraker et al., "A Formal Model of Crash Recovery in a Distributed System," IEEE Transactions on Software Engineering, 1983.