A Value-Based Framework for Software Evolutionary Testing

The fundamental objective of value-based software engineering is to integrate consistent stakeholder value propositions into the full range of software engineering principles and practices so as to increase the value of software assets. In such a value-based setting, artifacts in software development, such as requirements specifications, use cases, test cases, or defects, are not treated as equally important during the development process. Instead, they are differentiated according to how much they contribute, directly or indirectly, to the stakeholder value propositions. The higher the contribution, the more important the artifact becomes. In turn, development activities involving more important artifacts should be given higher priority and greater consideration in the development process. In this paper, a value-based framework is proposed for carrying out software evolutionary testing, with a focus on test data generation through genetic algorithms. The proposed framework incorporates general principles of value-based software testing and makes it possible to prioritize testing decisions that are rooted in the stakeholder value propositions. It allows for a cost-effective way to fulfill the most valuable testing objectives first and a graceful degradation when the planned testing process has to be shortened.
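The combination of ideas sketched above can be illustrated with a minimal toy in Python: a genetic algorithm evolves integer test inputs, the fitness function weights each covered objective by a stakeholder-assigned value, and objectives are ordered by value so that the most valuable ones are pursued first when the testing budget is cut short. The target names, weights, and divisibility-based coverage predicate are illustrative assumptions for this sketch, not the paper's actual framework.

```python
import random

# Illustrative coverage targets for a hypothetical program under test.
# Each target maps to (divisor, stakeholder_value); an input "covers" a
# target when it is divisible by the divisor. Names, weights, and this
# coverage predicate are toy assumptions, not taken from the paper.
TARGETS = {
    "login_branch": (7, 10.0),
    "checkout_branch": (5, 8.0),
    "logging_branch": (3, 1.0),
}

def fitness(candidate):
    """Value-weighted fitness: sum the stakeholder value of every
    coverage target the candidate input exercises."""
    return sum(v for d, v in TARGETS.values() if candidate % d == 0)

def prioritized_targets():
    """Order targets by stakeholder value so the most valuable
    objectives are addressed first, giving graceful degradation
    when the planned testing process is shortened."""
    return sorted(TARGETS, key=lambda t: TARGETS[t][1], reverse=True)

def evolve(pop_size=20, generations=40, seed=1):
    """Minimal genetic algorithm over integer test inputs."""
    rng = random.Random(seed)
    pop = [rng.randrange(1, 1000) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # elitist selection by value-weighted fitness
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.choice(parents), rng.choice(parents)
            child = (a + b) // 2 + rng.randrange(-5, 6)  # crossover + mutation
            children.append(max(1, child))
        pop = parents + children
    return max(pop, key=fitness)
```

In this toy, an input such as 105 covers all three targets and attains the maximum fitness of 19.0; replacing the uniform coverage reward of classical search-based testing with value weights is what steers the search toward high-value objectives first.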
