The E-Learning Maturity Model (eMM) provides a means by which institutions can assess and compare their capability to sustainably develop, deploy and support e-learning. The eMM has been successfully piloted and refined in eleven New Zealand tertiary institutions (seven universities and four polytechnics) and used both to guide individual institutions' understanding of their e-learning capability and to provide useful information on the sector as a whole, outcomes that can be translated into other institutional and regional contexts in order to guide strategic and operational planning and investment. This paper describes the extensively updated set of benchmarks and the methodology that constitute the current version of the eMM, along with results from further application of the eMM in the UK and New Zealand. Lessons learnt from the perspectives of institutional leadership and of the quality improvement of e-learning are included.

Introduction

The E-Learning Maturity Model (Marshall and Mitchell 2006) provides a means by which institutions can assess and compare their capability to sustainably develop, deploy and support e-learning. The eMM is based on the ideas of the Capability Maturity Model (CMM, Paulk et al. 1993) and SPICE (Software Process Improvement and Capability dEtermination; El Emam et al. 1998; SPICE 2002) methodologies. The underlying idea that guides the development of the eMM is that the ability of an institution to be effective in any particular area of work depends on its capability to engage in high-quality processes that are reproducible and able to be extended and sustained as demand grows. A key aspect of the eMM is that it does not rank institutions, but rather acknowledges the reality that all institutions will have aspects of strength and weakness that can be learnt from and improved.

The rapid growth in the technologies being used, the ways in which they are being applied across an ever-widening group of academic disciplines and the evolving skills and experience of teachers and students mean that e-learning is a moving target. Any benchmarking approach that presumes particular e-learning technologies or pedagogies is unlikely to meaningfully assess a range of institutions within a single country, let alone allow for useful international collaboration and comparison, particularly over an extended period of time.

As a consequence of the desire for the eMM to support technological and organisational change, the meaning of e-learning implicit in the eMM is broadly defined. At its heart lies the impact of computers and related communication technologies on the range of activities traditionally undertaken by teachers and learners. However, as the eMM is institutionally focused, the model considers the wider implications of the use of digital technology, most particularly the systems and resources needed to ensure that the use of technology by students and teachers is efficient, effective, and can be sustained operationally and strategically.

The model has been successfully piloted and refined in New Zealand (Marshall and Mitchell 2005; Marshall 2005, 2006a) and used both to guide individual institutions' understanding of their e-learning capability and to provide useful information on the sector as a whole. Benchmarking e-learning capability in this manner is necessary for programme managers to understand where their organisation lacks the capacity to meet its goals, and consequently to prioritise investment.
Although this paper uses the term "institution" to refer to the organisational unit being assessed, this is not a requirement of the model itself. As is illustrated by the results for the UK institution below, it is possible to conduct multiple eMM assessments within a single institution, thus gaining insights about disciplinary, structural or other organisationally important divisions of the institution.

Key Concepts of the eMM

Capability

Capability is perhaps the most important concept incorporated in the eMM. It describes the ability of an institution to ensure that e-learning design, development and deployment is meeting the needs of the students, staff and institution. Critically, capability includes the ability of an institution to sustain e-learning delivery and the support of learning and teaching as demand grows and staff change. As noted by Fullan:

"The answer to large-scale reform is not to try to emulate the characteristics of the minority who are getting somewhere under present conditions ... Rather, we must change existing conditions so that it is normal and possible for a majority of people to move forward" (Fullan 2001, page 268)

Dimensions of capability

A key lesson arising from the application and analysis of the first version of the eMM was that the concept of levels, reused from the CMM and SPICE, was unhelpful in describing the capability of an individual process (Marshall and Mitchell 2006). The use of levels incorrectly implies a hierarchical model of process improvement in which capability is assessed and built in a layered and progressive manner. The concept underlying the eMM's use of dimensions is holistic capability. Rather than measuring progressive levels, the model describes the capability of a process from the synergistic perspectives of Delivery, Planning, Definition, Management and Optimisation. In thinking about the relationship between the five dimensions it is helpful to consider them arranged as in Figure 1. The row of boxes used on the left to display summaries of process capabilities is helpful when performing comparisons within or between assessments (for example Figure 4), but it can imply a hierarchical relationship that is misleading when interpreting individual process capability results.

Figure 1: eMM Process Dimensions

Dimension 1 (Delivery) is concerned with the creation and provision of process outcomes. Assessments of this dimension are aimed at determining the extent to which the process is seen to operate within the institution.

Dimension 2 (Planning) assesses the use of predefined objectives and plans in conducting the work of the process. The use of predefined plans potentially makes processes easier to manage effectively and to reproduce if successful.

Dimension 3 (Definition) covers the use of institutionally defined and documented standards, guidelines, templates and policies during the process implementation. An institution operating effectively within this dimension has clearly defined how a given process should be performed. This does not, however, mean that the staff of the institution actually follow that guidance.

Dimension 4 (Management) is concerned with how the institution manages the process implementation and ensures the quality of the outcomes. Capability within this dimension reflects the measurement and control of process outcomes.

Dimension 5 (Optimisation) captures the extent to which an institution uses formal approaches to improve the activities of the process. Capability within this dimension reflects a culture of continuous improvement.
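To make the holistic, non-hierarchical nature of the dimensions concrete, the sketch below models a single process rated independently on each of the five dimensions. The four-point rating scale, the example process and the helper method are illustrative assumptions only, not part of the published eMM instrument.

from dataclasses import dataclass, field
from enum import IntEnum
from typing import Dict, List

# Illustrative four-point rating scale; the published eMM workbook defines its
# own assessment scale, so treat these labels as placeholders.
class Rating(IntEnum):
    NOT_ADEQUATE = 0
    PARTIALLY_ADEQUATE = 1
    LARGELY_ADEQUATE = 2
    FULLY_ADEQUATE = 3

DIMENSIONS = ("Delivery", "Planning", "Definition", "Management", "Optimisation")

@dataclass
class ProcessCapability:
    """Capability of one eMM process, rated independently on each dimension."""
    name: str
    ratings: Dict[str, Rating] = field(
        default_factory=lambda: {d: Rating.NOT_ADEQUATE for d in DIMENSIONS}
    )

    def weak_dimensions(self) -> List[str]:
        # Dimensions lagging behind the process's strongest dimension. Because
        # the dimensions are holistic rather than hierarchical, a gap on any one
        # of them undermines the process as a whole; there is no notion of
        # 'completing' one dimension before attempting the next.
        best = max(self.ratings.values())
        return [d for d, r in self.ratings.items() if r < best]

# Hypothetical example: a process delivered and planned reasonably well but
# lacking institutional definition, management and optimisation -- the pattern
# described below as ad hoc and unsustainable.
example = ProcessCapability(
    name="Example learning process",
    ratings={
        "Delivery": Rating.LARGELY_ADEQUATE,
        "Planning": Rating.LARGELY_ADEQUATE,
        "Definition": Rating.PARTIALLY_ADEQUATE,
        "Management": Rating.NOT_ADEQUATE,
        "Optimisation": Rating.NOT_ADEQUATE,
    },
)
print(example.weak_dimensions())  # ['Definition', 'Management', 'Optimisation']

Represented this way, imbalances across the dimensions are immediately visible without implying that any one dimension must be completed before another.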
An organisation that has developed capability on all dimensions for all processes will be more capable than one that has not. Strong capability at particular dimensions that is not supported by capability at the other dimensions will not deliver the desired process outcomes. Capability at dimensions one and two that is not supported by capability at the other dimensions will be ad hoc, unsustainable and unresponsive to changing organisational and learner needs. Capability at dimensions three, four and five that is not complemented by similar strength at dimensions one and two will be unable to meet the process goals and is liable to fail.

Processes

The eMM divides the capability of institutions to sustain and deliver e-learning into five major categories or process areas (Table 1) that indicate clusters of strongly related processes. It should be noted, however, that all of the processes are interrelated to some degree, particularly through shared practices and the perspectives of the five dimensions.

Process category   Brief description
Learning           Processes that directly impact on pedagogical aspects of e-learning
Development        Processes surrounding the creation and maintenance of e-learning resources
Co-ordination      Processes surrounding the oversight and management of e-learning
Evaluation         Processes surrounding the evaluation and quality control of e-learning through its entire lifecycle
Organisation       Processes associated with institutional planning and management

Table 1: eMM version two process categories (revised from Marshall and Mitchell, 2003)

The processes used in version one of the eMM were developed from the ‘Seven Principles’ of Chickering and Gamson (1987) and the ‘Quality on the Line’ benchmarks (IHEP 2000), as outlined in Marshall and Mitchell (2004). These had the advantage of being widely accepted as guidelines or benchmarks for e-learning delivery (Sherry 2003); however, experience in using them during the initial capability assessment of nine New Zealand institutions, reported in Marshall (2005), identified some significant limitations. Applying the recommendations from the evaluation of the first version of the eMM resulted in a reduced set of thirty-four processes, which were then subjected to further review through a series of workshops conducted in Australia and the UK (Marshall, 2006a). This review identified a potential set of three hundred and fifty-four possible items (Table 2). Examining the sorted items in Table 2, it is apparent that support issues dominated the concerns of the workshop participants, who included a mix of e-learning experts, practitioners and managers. The desire for an e-learning strategy and plan was repeatedly noted, but a focus on operational concerns was apparent from the absence of items at the higher dimensions.

Process area   Delivery   Planning   Definition   Management   Optimisation   Total Unique Items   Total Items
Learning       35         14         4            2            0              55                   63
Development    38         24         17           5            0
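The per-cell counts in Table 2 are simple tallies of the workshop items once each has been assigned to a process area and a dimension. A minimal sketch of that tallying is given below; the item assignments are invented purely for illustration (the real workshops produced the counts shown above), and the sketch does not distinguish unique items from the total item count reported in the table.

from collections import Counter
from typing import Dict, Iterable, Tuple

DIMENSIONS = ("Delivery", "Planning", "Definition", "Management", "Optimisation")

def tally_items(items: Iterable[Tuple[str, str]]) -> Dict[str, Dict[str, int]]:
    """Count workshop items per (process area, dimension) cell, as in Table 2."""
    items = list(items)      # allow generators to be passed in
    counts = Counter(items)  # occurrences of each (area, dimension) pair
    areas = sorted({area for area, _ in items})
    return {area: {dim: counts[(area, dim)] for dim in DIMENSIONS} for area in areas}

# Invented example assignments, not the actual workshop data.
assignments = [
    ("Learning", "Delivery"),
    ("Learning", "Delivery"),
    ("Learning", "Planning"),
    ("Development", "Delivery"),
    ("Development", "Definition"),
]

for area, row in tally_items(assignments).items():
    print(area, row, "row total:", sum(row.values()))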
References

[1] P. David Mitchell. The impact of educational technology: a radical reappraisal of research methods, 1997.
[2] Michael J. Spendolini et al. The Benchmarking Book, 1992.
[3] S. Marshall. E-learning process maturity in the New Zealand tertiary sector, 2005.
[4] Stephen Marshall et al. An e-learning maturity model?, 2002.
[5] Mark C. Paulk et al. Capability Maturity Model, 1991.
[6] Terence Patrick Rout. The SPICE approach to software process improvement, 2001.
[7] Stephen Marshall et al. Applying SPICE to e-learning: an e-learning maturity model?, ACE, 2004.
[8] Khaled El Emam et al. SPICE: The theory and practice of software process improvement and capability determination, 1997.
[9] Ray Ison. Applying systems thinking to higher education, 1999.
[10] Krassie Petrova et al. Business undergraduates learning online: a one semester snapshot, 2005.
[11] Charlotte Neuhauser. A maturity model: does it provide a path for online course design?, 2004.
[12] Stephen Marshall et al. Potential indicators of e-learning process capability, 2003.