Evaluating artefacts with children: age and technology effects in the reporting of expected and experienced fun

In interaction design, several methods are used to gather user experience data. A common approach is the survey, in which users are typically asked, after they have experienced a product, about their opinion of and satisfaction with it. This paper describes the use of the Smileyometer (an instrument from the Fun Toolkit) to evaluate user experience with children by asking for opinions on expected as well as experienced fun. Two studies examined the ratings that children from two different age groups, in two different contexts, gave to a set of varied, age-appropriate interactive technology installations. The ratings given before use (expectations) are compared with the ratings given after use (experience) across the age groups and across the installations. The studies show that different ratings were given for different installations and that there were age-related differences in how the Smileyometer was used to rate user experience; these findings provide evidence that children can, and do, discriminate between different experiences, and that they reflect on user experience after using technologies. In most cases, across both age groups, children expected a lot from the technologies, and their post-use (experienced) ratings confirmed that their expectations had been met. The paper concludes by considering the implications of the collective findings for the design and evaluation of technologies with children.
