Enriching Task Models with Usability and User Experience Evaluation Data

Usability and user experience evaluation results are often difficult to take into account during an iterative design process. This is because evaluation exploits concrete artefacts (a prototype or a system), while design and development are based on more abstract descriptions such as task models or software models. As the concrete data cannot be represented in these abstractions, evaluation results are simply discarded. This paper addresses the discrepancy between the abstract view of task models and the concrete data produced in evaluations, first by describing the requirements for a task modelling notation: (a) representation of data for each individual participant, (b) representation of aggregated data for one evaluation as well as (c) for several evaluations, and (d) visualization of multi-dimensional data gathered at runtime from both the evaluation and the interactive system. Second, it shows how these requirements were integrated into a task modelling tool. Possible uses of the tool are demonstrated with an example from an experimental evaluation.
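To make the data-representation requirements concrete, the sketch below (not from the paper; all names such as `TaskNode`, `Evaluation`, and `Measurement` are hypothetical) shows one way per-participant measurements (requirement a) and aggregates over one or several evaluations (requirements b and c) could be attached to the nodes of a task model.

```python
from dataclasses import dataclass, field
from statistics import mean
from typing import Dict, List, Optional

@dataclass
class Measurement:
    """One observed value for one participant (e.g. task time, SUS score)."""
    participant_id: str
    metric: str          # e.g. "task_time_s", "error_count", "sus_score"
    value: float

@dataclass
class Evaluation:
    """All measurements gathered in one evaluation session or study."""
    name: str
    measurements: List[Measurement] = field(default_factory=list)

    def values(self, metric: str) -> List[float]:
        return [m.value for m in self.measurements if m.metric == metric]

@dataclass
class TaskNode:
    """A node of the task model, enriched with evaluation data."""
    label: str
    evaluations: List[Evaluation] = field(default_factory=list)
    children: List["TaskNode"] = field(default_factory=list)

    def per_participant(self, metric: str) -> Dict[str, List[float]]:
        # Requirement (a): raw data for each individual participant.
        out: Dict[str, List[float]] = {}
        for ev in self.evaluations:
            for m in ev.measurements:
                if m.metric == metric:
                    out.setdefault(m.participant_id, []).append(m.value)
        return out

    def aggregate(self, metric: str, evaluation: Optional[str] = None) -> float:
        # Requirements (b) and (c): mean over one named evaluation,
        # or over all evaluations attached to this node when None.
        evs = [e for e in self.evaluations if evaluation in (None, e.name)]
        vals = [v for e in evs for v in e.values(metric)]
        return mean(vals) if vals else float("nan")
```

A brief usage example under the same assumptions:

```python
node = TaskNode("Enter PIN")
node.evaluations.append(Evaluation("study-1", [
    Measurement("P1", "task_time_s", 12.4),
    Measurement("P2", "task_time_s", 15.1),
]))
node.aggregate("task_time_s")  # -> 13.75
```

Requirement (d), the multi-dimensional visualization of such data on the task model, is a tool concern layered on top of this kind of structure and is not sketched here.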
