Developing an Embedded Model for Test Suite Prioritization Process to Optimize Consistency Rules for Inconsistencies Detection and Model Changes

Software models typically contain many inconsistencies, and consistency checkers help engineers find them. Even if engineers are willing to tolerate inconsistencies, they are better off knowing about their existence to avoid follow-on errors and unnecessary rework. However, current approaches do not detect or track inconsistencies fast enough. This paper presents an automated approach for detecting and tracking inconsistencies in real time (while the model changes). Engineers only need to define consistency rules, in any language, and our approach automatically identifies how model changes affect these consistency rules. It does this by observing the behavior of consistency rules to understand how they affect the model. The approach is quick, correct, scalable, fully automated, and easy to use, as it does not require any special skills from the engineers using it. We use this model to define generic prioritization criteria that are applicable to GUI applications, Web applications, and embedded models. We evolve the model and use it to develop a unified theory. Within the context of this model, we develop and empirically evaluate several prioritization criteria and apply them to four standalone GUI applications and three Web-based applications, their existing test suites, and, mainly, embedded systems. In this model we run our data collection and test suite prioritization process on only seven programs and their existing test suites; an experiment that would generalize more readily would include multiple programs of different sizes and from different domains, and we may conduct additional empirical studies with larger event-driven software (EDS) to address this threat. First, we assume that each test case has a uniform cost of running (processor time) and monitoring (human time); these assumptions may not hold in practice. Second, we assume that each fault contributes uniformly to the overall cost, which again may not hold in practice.
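The real-time detection described above rests on one observation: while a consistency rule evaluates, the checker can record exactly which model elements the rule reads (its scope), and later re-evaluate only those rules whose scope contains a changed element. The sketch below illustrates this idea under simplified assumptions; the class name, the dict-based model, and the `read` accessor are hypothetical and not taken from the paper's implementation.

```python
# Minimal sketch of scope-based incremental consistency checking.
# Assumptions (illustrative, not from the paper): the model is a dict of
# element ids to property dicts, and each rule is a callable that accesses
# model elements only through the read() function handed to it.

class IncrementalChecker:
    def __init__(self, model):
        self.model = model      # element id -> property dict
        self.scopes = {}        # rule -> set of element ids it read
        self.results = {}       # rule -> bool (result of last evaluation)

    def _evaluate(self, rule):
        accessed = set()
        def read(elem_id):
            accessed.add(elem_id)        # record every element the rule touches
            return self.model[elem_id]
        self.results[rule] = rule(read)  # rule evaluates via read()
        self.scopes[rule] = accessed     # remember its observed scope

    def add_rule(self, rule):
        self._evaluate(rule)

    def on_change(self, elem_id):
        # Re-evaluate only rules whose observed scope touched the element.
        for rule in [r for r, s in self.scopes.items() if elem_id in s]:
            self._evaluate(rule)

    def inconsistencies(self):
        return [r for r, ok in self.results.items() if not ok]
```

Because an unrelated change never triggers re-evaluation of a rule that did not read the changed element, checking cost tracks the size of the change rather than the size of the model, which is what makes evaluation feasible while the model is being edited.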
GJCST-C Classification: D.2.5

Muzammil H Mohammed α & Sultan Aljahdali σ
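The uniform-cost assumption flagged in the threats to validity can be relaxed in the prioritization process itself: instead of ordering tests purely by how much new coverage they add, order them by new coverage gained per unit cost. The sketch below is a hypothetical greedy criterion in this spirit; the function name, the coverage sets, and the cost figures are illustrative, not the paper's actual criteria.

```python
# Hypothetical sketch of a greedy, cost-cognizant prioritization criterion:
# order tests by additional coverage gained per unit cost, relaxing the
# uniform-cost assumption. All names and data here are illustrative.

def prioritize(tests, coverage, cost):
    """Return test ids ordered by new coverage per unit cost (greedy)."""
    remaining = list(tests)
    covered = set()
    order = []
    while remaining:
        best = max(remaining,
                   key=lambda t: len(coverage[t] - covered) / cost[t])
        order.append(best)
        covered |= coverage[best]
        remaining.remove(best)
    return order

coverage = {"t1": {1, 2}, "t2": {1, 2, 3, 4}, "t3": {5}}
cost = {"t1": 1.0, "t2": 4.0, "t3": 1.0}
# "t1" wins first (2 new elements / cost 1); then "t3" (1/1) beats "t2" (2/4).
print(prioritize(["t1", "t2", "t3"], coverage, cost))  # -> ['t1', 't3', 't2']
```

With uniform costs this reduces to the classic additional-coverage ("additional greedy") ordering, so the uniform-cost setting studied in the experiments is the special case cost[t] = 1 for all tests.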
