KAMP: Karlsruhe Architectural Maintainability Prediction

Over their lifetime, software systems usually need to be adapted to fit a changing environment or to cover newly required functionality. The effort necessary to implement such changes is related to the maintainability of the software system; maintainability is therefore an important quality attribute. Today, software architecture plays an important role in achieving software quality goals, so it is useful to evaluate software architectures with respect to their impact on the quality of the resulting system. However, unlike other quality attributes, such as performance or reliability, the impact of the software architecture on maintainability has received relatively little quantitative treatment. In particular, the cost of software evolution stems not only from software development activities, such as re-implementation, but also from software management activities, such as re-deployment and upgrade installation. Most maintainability metrics are based on the code of object-oriented designs rather than on architectures, and they do not consider costs arising from software management activities. Likewise, existing architectural maintainability evaluation techniques yield only qualitative (and often subjective) results obtained manually, and they concentrate on software (re-)development costs. In this paper, we present KAMP, the Karlsruhe Architectural Maintainability Prediction method, a quantitative approach to evaluating the maintainability of software architectures. Our approach estimates the costs of change requests for a given architecture and takes into account re-implementation costs as well as re-deployment and upgrade activities. We combine several strengths of existing approaches. First, our method evaluates maintainability for concrete change requests and makes use of explicit architecture models. Second, it estimates change efforts using semi-automatic derivation of work plans, bottom-up effort estimation, and guidance in the investigation of estimation supports (e.g. design and code properties, team organization, development environment, and other influence factors).
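To illustrate the bottom-up estimation idea described above, the following minimal sketch (all names and structures are illustrative assumptions, not the paper's notation or tooling) treats a change request as a work plan of activities, covering re-implementation as well as re-deployment and upgrade tasks, and aggregates per-activity estimates into a total effort figure.

```python
# Hypothetical sketch of KAMP-style bottom-up effort aggregation.
# Activity names, kinds, and estimates are made up for illustration.
from dataclasses import dataclass

@dataclass
class Activity:
    name: str            # e.g. "change payment interface"
    kind: str             # "re-implementation", "re-deployment", "upgrade", ...
    effort_hours: float   # bottom-up estimate provided by the responsible developer

def total_effort(work_plan: list[Activity]) -> float:
    """Sum the per-activity estimates into the overall change-request effort."""
    return sum(a.effort_hours for a in work_plan)

# Example work plan semi-automatically derived from an architecture model
# (here simply written out by hand).
work_plan = [
    Activity("change payment interface", "re-implementation", 16.0),
    Activity("adapt dependent client component", "re-implementation", 8.0),
    Activity("re-deploy payment service", "re-deployment", 2.0),
    Activity("install upgrade at customer site", "upgrade", 4.0),
]
print(f"Estimated effort: {total_effort(work_plan)} person-hours")
```

The point of the sketch is only that management activities (re-deployment, upgrade installation) enter the estimate alongside re-implementation work; how work plans are derived from the architecture model is described in the paper itself.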
