A Process to Effectively Identify "Guilty" Performance Antipatterns

The problem of interpreting the results of software performance analysis is critical. Software developers expect feedback in terms of architectural design alternatives (e.g., split a software component in two components and re-deploy one of them), whereas the results of performance analysis are either pure numbers (e.g., mean values) or functions (e.g., probability distributions). Support for interpreting such results, which would help fill the gap between numbers/functions and software alternatives, is still lacking. Performance antipatterns can play a key role in the search for performance problems and in the formulation of their solutions. In this paper we tackle the problem of identifying, among a set of detected performance antipatterns, the ones that are the real causes of problems (i.e., the "guilty" ones). To this end, we introduce a process that elaborates the performance analysis results and scores performance requirements, model entities, and performance antipatterns. The cross-observation of such scores allows us to classify the level of guiltiness of each antipattern. An example modeled in Palladio is provided to demonstrate the validity of our approach by comparing the performance improvements obtained after removal of differently scored antipatterns.
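To make the scoring idea concrete, the following is a minimal illustrative sketch (not the paper's actual process) of how detected antipatterns might be ranked by guiltiness: each antipattern inherits the violation scores of the performance requirements it can affect, and the antipatterns are then sorted by their accumulated score. All names and numbers below are hypothetical example data.

```python
# Hypothetical inputs: how badly each performance requirement is violated
# (0.0 = satisfied), and which requirements each detected antipattern
# may affect. Both mappings are invented for illustration only.
requirement_scores = {"R1": 0.8, "R2": 0.3, "R3": 0.0}

antipattern_requirements = {
    "Blob": ["R1", "R2"],
    "Pipe and Filter": ["R2"],
    "Extensive Processing": ["R3"],
}

def guiltiness(antipattern: str) -> float:
    """Sum of violation scores over the requirements this antipattern touches."""
    return sum(requirement_scores[r] for r in antipattern_requirements[antipattern])

# Rank antipatterns from most to least "guilty".
ranked = sorted(antipattern_requirements, key=guiltiness, reverse=True)
print(ranked)  # → ['Blob', 'Pipe and Filter', 'Extensive Processing']
```

In this toy ranking, an antipattern touching only satisfied requirements scores zero, so removing it would not be expected to improve the violated requirements; the actual process in the paper also scores model entities and cross-observes all three score sets.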
