In his paper, Mingers sets the scene with some general comments about the development of management science and the relevance of the subject he addresses. He then describes the characteristics of 11 methods designed to support OR/Management Science interventions (Table 1 in his paper). The strength of support that each method offers to different aspects of an intervention is then indicated on a matrix (Figure 2 in his paper) constructed to help practitioners choose and mix methods in an intervention. The overall result is, I think, best described as a characterization of the methods rather than a classification.

Nevertheless, a framework for characterization is potentially useful, and it is now 20 years since Jackson and Keys (see also Jackson, and Flood and Jackson) published their System of Systems Methodologies (SOSM). On comparing the papers, what is striking is that most of the 11 methods chosen by Mingers in 2003 existed at the time of the 1984 analysis. This would seem to contradict the first sentence of the paper, which refers to the burgeoning of many different methods and methodologies in recent years.

The first paragraph of the paper continues by stating that a range of methods and methodologies are ‘routinely’ employed together in the same intervention, referring to a survey by Munro and Mingers and citing a number of case studies. This might be the case for a particular group of academics, but as Munro and Mingers have shown, on further examination of their data, the range of methods deployed and combined by OR practitioners is restricted mainly to the more traditional OR techniques such as simulation, statistical analysis, mathematical modelling and forecasting. The practitioners not only use a less diverse set of methods, but they are also less inclined to use them in combination than the sample as a whole.

The analysis in the paper starts by deriving seven headings to be used to characterize a selected set of methods/techniques. The results are presented in Table 1 of the paper. The paper states that the methods included ‘were felt to cover the main, and most commonly used types.’ However, the original Munro and Mingers survey discussed above contradicts this statement. The two most frequently used methods, statistical analysis and forecasting, are not characterized in the paper, whereas some of the least used, hypergames, SAST and CSH, are. Moreover, when the further analysis of the Munro and Mingers data is examined for the responses of non-academic practitioners (ie excluding the academics), the contradiction is even starker: cognitive mapping/SODA, interactive planning, SAST, hypergames and CSH were not evaluated by any of the respondents, leading Munro and Mingers to conclude that the respondents had never used them. It is difficult to understand why, in a guide for practitioners, the seldom-used soft methods should be favoured both over the much-used hard methods such as statistics and forecasting and over other methods that practitioners apparently do sometimes use, such as decision analysis or scenarios.

The rather eccentric choice of examples does not, of course, invalidate the general approach. This proceeds by ‘mapping’ the methods onto a framework in the form of a four by three matrix (four phases of an intervention against the material, social and personal dimensions), shown in Figure 2 of the paper. The mapping of the methods identified in Table 1 onto this matrix lies at the heart of the approach, and it could potentially provide some insight and a basis for debate. However, in my view the approach is not well served by its application in the paper.
First, the paper is bedevilled by unfortunate typographical errors (for instance, on page 567 my name is spelt wrongly and two of the references to my work contain errors). In particular, there are two errors in the key figure, Figure 2: the analysis-material cell should read ‘underlying causal structure’ and the action-material cell should presumably read ‘select and implement best alternatives.’ These are not important in themselves, but they are then copied 11 times in Figures 3–5.

Second, I am puzzled by the choice of words in each cell. These are presumably just meant to be indicative, but I could not work out why in the social row one should try to appreciate ‘social practices, power relations’ while in the analysis phase one should concentrate on ‘distortions, conflicts, interests’. Why not the other way round? Why such a ‘critical’ (in the philosophical sense) emphasis when most of the methods being classified are not ‘critical’? Why not include the more positive (in the non-philosophical sense) aspects of social activity, such as group formation, norms, culture, leadership and decision taking? In the action column, I could not see why the social aspects should be depicted as ‘generate empowerment and enlightenment’, which sounds personal, while the personal aspects are described as ‘generate accommodation and consensus’, which sounds social.
[1] Lumsdaine E et al (1995). Creative Problem Solving.
[2] Jackson M et al (1990). Beyond a System of Systems Methodologies.
[3] Jackson M et al (1984). Towards a System of Systems Methodologies.
[4] Allbon L et al (1968). Creative Problem Solving. Canadian Journal of Occupational Therapy (Revue canadienne d'ergothérapie).
[5] Mingers J et al (2002). The use of multimethodology in practice—results of a survey of practitioners. J Oper Res Soc.
[6] Mingers J et al (2003). A classification of the philosophical assumptions of management science methods. J Oper Res Soc.
[7] Mingers J et al (2004). Response to Richard Ormerod. J Oper Res Soc.