Using simulation to evaluate prediction techniques [for software]

The need for accurate software prediction systems increases as software becomes larger and more complex. A variety of techniques have been proposed, but none has proved consistently accurate, and the underlying characteristics of the data set influence which prediction system is most appropriate. It has proved difficult to obtain significant results over small data sets; consequently, we required large validation data sets. Moreover, we wished to control the characteristics of such data sets in order to systematically explore the relationship between accuracy, choice of prediction system and data set characteristics. Our solution has been to simulate data, allowing both control and the possibility of large validation cases. We compared regression, rule induction and nearest neighbours (a form of case-based reasoning). The results suggest that there are significant differences depending upon the characteristics of the data set. Consequently, researchers should consider the prediction context when evaluating competing prediction systems. We also observed that the more "messy" the data and the more complex the relationship with the dependent variable, the greater the variability in the results. This became apparent because we sampled two different training sets from each simulated population of data: in the more complex cases, we observed significantly different results depending upon the training set. This suggests that researchers will need to exercise caution when comparing different approaches and utilise procedures such as bootstrapping to generate multiple samples for training purposes.
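The evaluation strategy described above, simulating a population with a known relationship, drawing more than one training sample from it, and scoring competing prediction systems against a large validation set, can be sketched as follows. This is an illustrative sketch only: the simulation parameters, the accuracy measure (MMRE is assumed here as it is common in this literature), and the choice of ordinary least squares versus k-nearest neighbours as the two competitors are assumptions, not the paper's actual experimental design.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, noise=0.3):
    # Hypothetical data generator: effort driven by size plus
    # size-proportional Gaussian noise. The paper's actual simulation
    # parameters are not given in the abstract.
    size = rng.uniform(1, 100, n)
    effort = 5 * size + noise * size * rng.standard_normal(n)
    return size.reshape(-1, 1), effort

def fit_ols(X, y):
    # Ordinary least squares via a least-squares solve; returns a predictor.
    Xb = np.hstack([np.ones((len(X), 1)), X])
    beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return lambda Xn: np.hstack([np.ones((len(Xn), 1)), Xn]) @ beta

def knn_predict(X_train, y_train, X_test, k=3):
    # Nearest-neighbour (analogy) prediction: mean of the k closest cases.
    d = np.abs(X_test[:, None, 0] - X_train[None, :, 0])
    idx = np.argsort(d, axis=1)[:, :k]
    return y_train[idx].mean(axis=1)

def mmre(actual, pred):
    # Mean Magnitude of Relative Error, a common accuracy measure.
    return np.mean(np.abs(actual - pred) / actual)

X_val, y_val = simulate(500)           # large simulated validation set
for trial in range(2):                 # two independent training samples
    X_tr, y_tr = simulate(30)
    ols = fit_ols(X_tr, y_tr)
    print(f"training set {trial}: "
          f"OLS MMRE={mmre(y_val, ols(X_val)):.3f}  "
          f"k-NN MMRE={mmre(y_val, knn_predict(X_tr, y_tr, X_val)):.3f}")
```

Running the loop with different training samples makes the abstract's point concrete: accuracy scores for the same technique can shift between draws, which is why resampling procedures such as bootstrapping are advisable before declaring one technique superior.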
