Methods and Tools for Focusing and Prioritizing the Testing Effort
Software testing is widely recognized as an essential part of any software development process, yet it remains an extremely expensive activity. The overall cost of testing has been estimated at no less than half of the entire development cost, if not more. Despite its importance, recent studies have shown that developers rarely test their applications and that most programming sessions end without a single test execution. New methods and tools that better allocate the developers' effort are therefore needed to increase system reliability and reduce testing costs: the available resources should be focused on the portions of the source code that are most likely to contain bugs.

In this thesis we focus on three activities that help prioritize the testing effort: bug prediction, test case prioritization, and the detection of code smells whose removal can fix energy issues. Although the effort devoted by the research community over the last decades, through empirical studies and the devising of new approaches, has led to interesting results, in the context of our research we highlighted several aspects that can still be improved, and we propose corresponding empirical investigations and novel approaches.

In the context of bug prediction, we devised two novel metrics, the developer's structural and semantic scattering. These metrics exploit the observation that scattered changes make developers more error-prone. The results of our empirical study show the superiority of our model with respect to baselines based on product metrics and process metrics. Afterwards, we devised a "hybrid" model that combines our scattering metrics with existing predictors, providing an average improvement in prediction accuracy. Besides analyzing predictors, we proposed a novel adaptive prediction classifier, which dynamically recommends the classifier best able to predict the bug-proneness of a class, based on the structural characteristics of that class. Models built on this classifier outperform models based on stand-alone classifiers, as well as those based on the Validation and Voting ensemble technique, in the context of within-project bug prediction. Lastly, we performed a differentiated replication study in the contexts of cross-project and within-project bug prediction, analyzing the behavior of seven ensemble methods. The results show that the problem is still far from solved and that the use of ensemble techniques does not provide evident benefits with respect to stand-alone classifiers.
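To make the scattering metrics concrete, the following is a minimal sketch of how a structural scattering score could be computed for one developer in a given time window. The pairwise package-distance formulation and the `package_distance` helper are our illustrative assumptions, not the exact definition evaluated in the thesis.

```python
from itertools import combinations

def package_distance(path_a, path_b):
    """Hypothetical distance between the packages (directories) containing
    two files: steps up to their common ancestor plus steps down again."""
    a, b = path_a.split("/")[:-1], path_b.split("/")[:-1]
    common = 0
    for x, y in zip(a, b):
        if x != y:
            break
        common += 1
    return (len(a) - common) + (len(b) - common)

def structural_scattering(changed_files):
    """Structural scattering of a developer in a time window: the number
    of files changed, scaled by the average package distance between every
    pair of changed files. Changes spread across distant packages score
    higher, capturing how 'scattered' the developer's activity is."""
    if len(changed_files) < 2:
        return 0.0
    pairs = list(combinations(changed_files, 2))
    avg = sum(package_distance(a, b) for a, b in pairs) / len(pairs)
    return len(changed_files) * avg

# Example: two files share a package, the third sits in a distant one
print(structural_scattering([
    "org/app/ui/Window.java",
    "org/app/ui/Dialog.java",
    "org/app/net/Socket.java",
]))  # -> 4.0
```

The semantic variant would follow the same skeleton, replacing the package distance with a textual (dis)similarity between the changed files.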
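The adaptive prediction classifier can be read as a meta-learning step: learn, from the structural characteristics of a class, which base classifier to trust for that class. The sketch below illustrates that idea under our own assumptions; the classifier pool, the cross-validated selection step, and the `fit_adaptive`/`predict_adaptive` helpers are hypothetical, not the thesis implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import GaussianNB

# Hypothetical pool of stand-alone classifiers to choose from per class
candidates = {
    "logistic": LogisticRegression(max_iter=1000),
    "naive_bayes": GaussianNB(),
    "random_forest": RandomForestClassifier(n_estimators=100),
}

def fit_adaptive(X, y):
    """Train a meta-model mapping a class's structural features to the
    base classifier most likely to predict its bug-proneness correctly."""
    # Which candidate predicts each training instance correctly?
    correct = {name: cross_val_predict(clf, X, y, cv=5) == y
               for name, clf in candidates.items()}
    names = list(candidates)
    # Label each instance with the first candidate that got it right
    best = np.array([next((n for n in names if correct[n][i]), names[0])
                     for i in range(len(y))])
    selector = RandomForestClassifier(n_estimators=100).fit(X, best)
    fitted = {name: clf.fit(X, y) for name, clf in candidates.items()}
    return selector, fitted

def predict_adaptive(selector, fitted, X_new):
    """For each new class, defer to the recommended base classifier."""
    chosen = selector.predict(X_new)
    return np.array([fitted[name].predict(row.reshape(1, -1))[0]
                     for name, row in zip(chosen, np.asarray(X_new))])
```

Note the design choice this illustrates: the selector never predicts bug-proneness itself; it only routes each class to the stand-alone classifier expected to perform best on classes with similar structural characteristics.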
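For contrast, the Validation and Voting baseline mentioned above combines a pool of classifiers by majority vote. The snippet below approximates it with scikit-learn's hard-voting ensemble on a synthetic stand-in for a class-level bug dataset; both the dataset and the exact voting scheme are simplifications for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Toy stand-in for a class-level bug dataset (features -> buggy yes/no)
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Majority ("hard") voting over the pool: a class is predicted buggy
# only if most base classifiers agree it is.
vv = VotingClassifier(
    estimators=[
        ("logistic", LogisticRegression(max_iter=1000)),
        ("naive_bayes", GaussianNB()),
        ("random_forest", RandomForestClassifier(n_estimators=100)),
    ],
    voting="hard",
)
vv.fit(X_tr, y_tr)
print("predicted buggy classes:", int(vv.predict(X_te).sum()))
```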