Task formulation

eLSE prescribes first identifying a number of analysis dimensions specific to the application domain. For each dimension, general usability principles are broken down into finer-grained quality criteria (ISO 9241, 1998) suited to addressing e-learning issues. By considering the literature on e-learning, the results of user studies, and the experience of usability experts, a number of specific guidelines have been identified and associated with these criteria, to be taken into account during the initial design phase. A set of Abstract Tasks addressing these guidelines is then identified. An Abstract Task (AT) is an evaluation pattern that makes it possible to maximize the reuse of the evaluator's expertise. Its goal is to capture usability inspection expertise and to express it in a precise and understandable form, so that it can be easily "reproduced", communicated, and exploited. The term "abstract" is used because: i) the activity specifications are formulated independently of the particular application, and ii) they refer to categories of application constituents rather than to specific constituents. ATs are formulated following a specific template, which includes five items:
− AT Classification Code and Title: they uniquely identify the AT and succinctly convey its essence.
− Focus of Action: it briefly describes the context, or focus, of the AT by listing the application components that are the evaluation entities.
− Intent: it describes the problem addressed by the AT and its rationale, making clear the specific goal to be achieved through the AT application.
− Activity Description: it describes in detail the activities to be performed during the AT application.
− Output: it describes the output of the fragment of the inspection the AT refers to.
Optionally, a comment is provided, indicating further ATs to be applied in combination or, when available, reporting significant examples of inspection findings, to better clarify which situations the evaluators should look for while applying the AT activity.
Our approach aims at evaluating both the e-learning platform and the educational modules. The e-learning platform is the software environment that usually offers a number of integrated tools and services for teaching, learning, communicating, and managing learning material. The educational modules, also called Learning Objects, are the specific learning material provided through the platform. ATs defined for the platform differ from those defined for e-learning modules, since different features need to be considered (Ardito et al., 2006; Lanzilotti, 2006). The ATs are organized in two groups: ATs for evaluating the platform (the container) and ATs for evaluating the educational modules (the content). Each group is further divided into categories. Such a categorization helps the evaluators to easily identify the ATs that address the evaluation aspects they are interested in.
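To make the template concrete, the sketch below shows one possible way to encode an AT record and the container/content grouping as simple Python data structures. This encoding is not part of eLSE; all class names, field names, and example values are hypothetical and chosen only for illustration.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class ATGroup(Enum):
    """Top-level grouping of Abstract Tasks: container vs. content (hypothetical encoding)."""
    PLATFORM = "container"           # the e-learning platform
    EDUCATIONAL_MODULE = "content"   # the educational modules (Learning Objects)


@dataclass
class AbstractTask:
    """One Abstract Task, mirroring the five items of the AT template."""
    code: str                   # AT Classification Code: uniquely identifies the AT
    title: str                  # succinctly conveys the AT's essence
    focus_of_action: str        # application components that are the evaluation entities
    intent: str                 # problem addressed and its rationale
    activity_description: str   # activities to perform when applying the AT
    output: str                 # output of the inspection fragment the AT refers to
    group: ATGroup              # container (platform) or content (educational module)
    category: str               # finer-grained category within the group
    comment: Optional[str] = None                         # optional: related ATs or example findings
    guidelines: List[str] = field(default_factory=list)   # guidelines the AT addresses


# Purely illustrative instance; it does not reproduce an AT from the eLSE catalogue.
example_at = AbstractTask(
    code="P-NAV-01",
    title="Orientation within the platform",
    focus_of_action="Menus, navigation bars, and course index pages",
    intent="Check that learners can always tell where they are and how to return",
    activity_description="Browse from the home page to a learning module and back, "
                         "verifying that orientation cues are present at each step",
    output="List of pages or components lacking adequate orientation cues",
    group=ATGroup.PLATFORM,
    category="Navigation",
)
```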
4.2 The execution phase

Execution phase activities are carried out every time an e-learning system must be evaluated. They include two major jobs: a systematic inspection and a user-based evaluation. The systematic inspection is a mandatory activity and is executed first. It produces a list of problems, such as design incompleteness, inconsistency, and irregularity. Oftentimes, inspection results are "obvious" flaws, which require obvious fixing. In some cases, however, results may need further confirmation with respect to user semantics. In these cases, user-based evaluation sessions are conducted. The last activity in the execution phase is the evaluation feedback, which follows the systematic inspection and the user testing (when conducted).

Systematic Inspection

Systematic inspection is performed by evaluators. During the inspection, the evaluator uses the ATs to perform a rigorous and systematic analysis and produces a report in which the discovered problems are described, as suggested in the AT. The list of ATs provides systematic guidance to the evaluator on how to inspect an application. Most evaluators are very good at analysing certain features of interactive applications; however, they often neglect other features that depend strictly on the specific application category. Exploiting a set of ATs ready for use allows evaluators with limited experience in a particular domain to perform a more accurate evaluation.

User-based evaluation

In eLSE, user-based evaluation is conducted, whenever necessary, to validate the inspection findings with real users. The most distinctive activity, with respect to traditional approaches, is the definition of Concrete Tasks (CTs for short), which describe the activities that users are required to perform during the test. CTs derive from the activity descriptions of the ATs and from the results of the inspection. Since the AT activity description is a formalisation of user tasks, starting from this description it is straightforward to formulate experimental tasks that guide users through the critical situations encountered by the evaluators during inspection. CTs are therefore conceived as a means of actually verifying the impact, upon the users, of the specific points of the application that are supposed to be critical for e-learning quality. In this sense, they make user-based evaluation better focused, thus optimizing the use of user resources and helping to obtain more precise feedback for designers. During evaluation execution, a sample of users is observed while executing the CTs, and relevant data are collected (users' actions, users' errors, time for executing actions, etc.). The outcome of this activity is therefore a collection of raw data. In the result summary, these data are coded, organized in a synthetic manner, and then analyzed.
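As an illustration of how CTs and the collected raw data might be organized, the following sketch (in the same hypothetical Python encoding as above) records one observation per user per CT and aggregates the raw data into a per-AT summary. It is a minimal sketch under assumed names, not the data model prescribed by eLSE.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ConcreteTask:
    """A Concrete Task given to test users, derived from an AT and from inspection findings."""
    derived_from_at: str   # code of the originating Abstract Task
    instructions: str      # what the user is asked to do, phrased for the specific application
    critical_point: str    # the suspected problem the CT is meant to verify with users


@dataclass
class Observation:
    """Raw data collected while one user executes one Concrete Task."""
    user_id: str
    task: ConcreteTask
    actions: List[str] = field(default_factory=list)   # user actions, in order
    errors: List[str] = field(default_factory=list)    # errors observed
    completion_time_s: float = 0.0                     # time taken to execute the task
    completed: bool = False


def summarize(observations: List[Observation]) -> Dict[str, Dict[str, float]]:
    """Code and organize the raw data into a synthetic per-AT summary for analysis."""
    summary: Dict[str, Dict[str, float]] = {}
    for obs in observations:
        key = obs.task.derived_from_at
        entry = summary.setdefault(
            key, {"runs": 0, "completed": 0, "errors": 0, "total_time_s": 0.0}
        )
        entry["runs"] += 1
        entry["completed"] += int(obs.completed)
        entry["errors"] += len(obs.errors)
        entry["total_time_s"] += obs.completion_time_s
    # Derive the mean completion time per AT once all runs are accumulated.
    for entry in summary.values():
        entry["mean_time_s"] = entry["total_time_s"] / entry["runs"]
    return summary
```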
[1] Laurie P. Dringus. An Iterative Usability Evaluation Procedure for Interactive Online Courses, 1995.
[2] Elliot Soloway et al. Learning theory in practice: case studies of learner-centered design. CHI, 1996.
[3] Jennifer Preece et al. Predicting quality in educational software: Evaluating for learning, usability and the synergy between them. Interacting with Computers, 1999.
[4] Teresa Roselli et al. An approach to usability evaluation of e-learning applications. Universal Access in the Information Society, 2006.
[5] Richard A. Schwier et al. Interactive Multimedia Instruction, 1993.
[6] Mark Notess et al. Usability, user experience, and learner experience. eLearn, 2001.
[7] Mei Wang et al. Evaluating the usability of Web-based learning tools. Educational Technology & Society, 2002.
[8] Jakob Nielsen et al. Usability engineering. The Computer Science and Engineering Handbook, 1997.