Conceptual simplicity meets organizational complexity: case study of a corporate metrics program

A corporate-wide metrics program faces enormous and poorly understood challenges as its implementation spreads from the centralized planning body, across many organizational boundaries, into the sites where data collection actually occurs. This paper presents a case study of the implementation of one corporate-wide program, focusing on the unexpected difficulty of collecting even a small number of straightforward metrics. Several mechanisms underlying these difficulties are identified, including attenuated communication across organizational boundaries, inertia created by existing data-collection systems, and participants' perceptions, expectations, and fears about how the data will be used. We describe how these factors shape both the interpretation of the measurement definitions and the degree of conformance actually achieved. We conclude with lessons learned, about both content and mechanisms, for navigating the tricky waters of organizational dynamics when implementing a company-wide program.
