Quantitative Analysis of Desirability in User Experience

The multi-dimensional nature of user experience warrants rigorous assessment of the interactive experience in systems. User experience assessments rest on product evaluations and subsequent analysis of the collected data using quantitative and qualitative techniques, and their quality depends on the effectiveness of the techniques deployed. This paper presents the results of a quantitative analysis of the desirability aspects of user experience in a comparative product evaluation study. Data were collected using the 118-item Microsoft Product Reaction Cards (PRC) tool and analysed using the Surface Measure of Overall Performance (SMOP) approach. The results suggest that incorporating SMOP into the analysis of PRC data yields conclusive evidence of desirability in user experience. The significance of the paper is that it presents a novel analysis method, combining Product Reaction Cards with the Surface Measure of Overall Performance approach, for effective quantitative analysis in both academic research and industrial practice.
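
To make the analysis step concrete, the sketch below shows how a SMOP score can be computed. SMOP, originally proposed for benchmarking labour market performance with radar charts, places n normalized indicator scores P1..Pn at equal angles on a radar chart and takes the area of the resulting polygon as a single overall score: SMOP = (1/2) sin(2*pi/n) * (P1*P2 + P2*P3 + ... + Pn*P1). The five desirability dimensions and the selection frequencies used here are hypothetical placeholders, not values from the study; this is a minimal illustration of the formula, not the authors' implementation.

```python
import math

def smop(scores):
    """Surface Measure of Overall Performance.

    Treats the n normalized scores (each in [0, 1]) as radii placed at
    equal angles (2*pi/n) on a radar chart and returns the area of the
    polygon they trace. A larger area indicates better overall
    performance across the indicators.
    """
    n = len(scores)
    if n < 3:
        raise ValueError("SMOP needs at least three indicators")
    # Sum of products of adjacent radii, wrapping around so the polygon closes.
    adjacent = sum(scores[i] * scores[(i + 1) % n] for i in range(n))
    # Each adjacent pair encloses a triangle of area 0.5 * r_i * r_{i+1} * sin(2*pi/n).
    return 0.5 * math.sin(2 * math.pi / n) * adjacent

# Hypothetical example: normalized selection frequencies for five
# desirability dimensions derived from PRC card choices, for two products.
product_a = [0.80, 0.65, 0.70, 0.55, 0.75]
product_b = [0.60, 0.70, 0.50, 0.60, 0.65]
print(f"SMOP for product A: {smop(product_a):.3f}")
print(f"SMOP for product B: {smop(product_b):.3f}")
```

Because the polygon area aggregates all dimensions into one number, two products evaluated with the same card set can be compared directly by their SMOP scores; note, however, that the result depends on the ordering of the indicators around the chart, which should therefore be fixed across the products being compared.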
