Agents learn from human experts: An approach to test reconfigurable systems

Faulty software is costly and potentially life-threatening as software products permeate our daily lives. The test process is therefore an indispensable component of the development cycle, yet it remains a formidable task. To alleviate its challenges, this contribution outlines a novel approach that enriches traditional test techniques with intuition-based test strategies learned by observing skilled human testers during various test sessions. The learned strategies are then verified, combined, and generalized so that they can be applied in similar test situations. In this way, a reasonable portion of the workload carried by human testers is shifted to the test system itself, leading to a significant reduction in development time and cost without sacrificing test efficiency.
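As a rough illustration of the idea only (not the authors' implementation), the sketch below shows one possible way such a system could record (system state, test action) observations from human test sessions, generalize them into simple majority rules per state, and reuse those rules when the same state is encountered again. All class and function names here are hypothetical.

```python
from collections import Counter, defaultdict


class StrategyLearner:
    """Illustrative sketch: learn simple state -> test-action rules
    from observed human test sessions and reuse them later."""

    def __init__(self):
        # Maps an observed system state (a hashable feature tuple) to
        # a counter of the test actions human experts chose in that state.
        self.observations = defaultdict(Counter)

    def observe(self, state, action):
        """Record one (state, action) pair from a human test session."""
        self.observations[state][action] += 1

    def generalize(self, min_support=2):
        """Keep only the majority action per state, and only if it was
        seen at least `min_support` times, yielding crisp rules."""
        rules = {}
        for state, actions in self.observations.items():
            action, count = actions.most_common(1)[0]
            if count >= min_support:
                rules[state] = action
        return rules

    def suggest(self, state, rules, default=None):
        """Apply a learned rule to a newly encountered state,
        falling back to a default action if no rule matches."""
        return rules.get(state, default)


# Example usage with toy infotainment-system states.
learner = StrategyLearner()
learner.observe(("radio", "muted"), "press_unmute")
learner.observe(("radio", "muted"), "press_unmute")
learner.observe(("nav", "no_gps"), "check_antenna")

rules = learner.generalize(min_support=2)
print(learner.suggest(("radio", "muted"), rules))  # -> press_unmute
print(learner.suggest(("nav", "no_gps"), rules))   # -> None (below min_support)
```

In practice the generalization step would involve clustering of similar test situations and rule extraction rather than an exact state match, but the sketch conveys the observe / generalize / reuse loop described above.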
