Abstract: This is the final report on research investigating the most effective methods for software engineering evaluation. The objective of this work is to identify and evaluate the methods used to measure the impact of changes to the software process, with particular interest in evaluating the benefits gained when different process models are used. The research pursued two types of activity. First, evaluation methods used in other disciplines were reviewed for their utility in software engineering. The long-term goal is to produce a taxonomy of methods, with a suggested range of strengths, for software engineers; such a unified view would help analysts select the most appropriate evaluation techniques for a given class of task. The second class of activity employed small studies in which evaluation methods could be tested and/or quantifiable concepts could be modeled. Because the research goal is to provide a means of appraising alternative development paradigms, most of the effort was spent on the study of an essential software process model (i.e., a meta-process model) and on the evaluation of paradigms that alter the process within that model.