ASSESSING RESPONDENTS' NEED FOR CLARIFICATION IN WEB SURVEYS USING AGE-BASED USER MODELING

INTRODUCTION

Respondents in standardized surveys tend to assume that their definitions of everyday terms such as “bedroom” or “job” match those of the survey designers, even though we know that the two often differ substantially. Even when respondents are offered clarification, they often do not request it because they do not believe it is needed. In our earlier studies of telephone interviews, we found that respondents answer more accurately when they receive clarification about question meaning (Schober & Conrad, 1997; Conrad & Schober, 2000). This is also true for web survey interfaces (Schober & Conrad, 1998), and it applies whether the respondent requests the clarification (by clicking to get official definitions) or the system offers unsolicited clarification.

The distinction between respondents requesting clarification and systems offering it reflects a longstanding debate in the human-computer interaction community between two approaches to interface design: those that emphasize giving users control (e.g., Shneiderman, 1997), where users can adjust the interface as desired, and those that emphasize user modeling (e.g., Kay, 1995), where interfaces automatically adapt to different users (Maes, 1994). In this study, we contrast typical web survey interfaces (usually standardized for everyone) with interfaces based on user control and on user modeling. We implemented simple user models that diagnosed respondent uncertainty: if a respondent was inactive (no clicks, no typing) for more than a particular duration, the system treated this as a signal of uncertainty and clarified the likely source of that uncertainty by providing a definition. We contrasted two variants of this type of user model. The first was a generic model, with thresholds based on how long an average user took to answer a particular question. The second was a group-based model, with thresholds based on how long average users within different groups took to answer a particular question.
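The two user-model variants can be sketched as a simple threshold check; the threshold values and question identifiers below are invented for illustration, not taken from the study:

```python
# Minimal sketch of the generic vs. group-based user models described
# above. All threshold values (seconds of inactivity) are hypothetical.

GENERIC_THRESHOLDS = {"Q1": 12.0, "Q2": 18.0}          # mean times, all users
GROUP_THRESHOLDS = {
    "younger": {"Q1": 10.0, "Q2": 15.0},
    "older":   {"Q1": 16.0, "Q2": 24.0},
}

def should_clarify(question, idle_seconds, group=None):
    """True when inactivity exceeds the model's per-question threshold,
    signaling likely uncertainty and triggering an unsolicited definition."""
    if group is None:                       # generic user model
        threshold = GENERIC_THRESHOLDS[question]
    else:                                   # group-based user model
        threshold = GROUP_THRESHOLDS[group][question]
    return idle_seconds > threshold
```

With these illustrative values, 13 seconds of inactivity on Q1 would trigger clarification under the generic model but not under the group-based model for an older respondent, whose expected response time is longer.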
For this study, we formed our groups based on age. Survey methods research has shown that age affects responding, largely because working memory declines with age (e.g., Knauper, 1999). More germane to our application, the cognitive aging literature documents a more general slowing of behavior with age (e.g., Salthouse, 1976). One might therefore expect older web survey users’ response times to be slower than younger users’. If so, the same period of inactivity may mean different things for older and younger users: a short lag may indicate confusion for a young user but simply ordinary thinking for an older one.

In the current study we contrasted five user interfaces in the laboratory. In the first, no clarification was available to users. The second was user-initiated: clarification was available if the user requested it by clicking. The third embodied a generic user model: the respondent could request clarification, but the system also provided clarification if the respondent’s inactivity exceeded a fixed threshold. The fourth was built around group-based user models, identical in approach to the generic user model except that the inactivity threshold differed for different groups of respondents. In the fifth interface, the definition always appeared with the survey question.
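The five interface conditions differ only in when a definition is shown, which can be summarized in one dispatch function; the condition names and threshold values here are hypothetical labels for the conditions just described:

```python
# Sketch of when each of the five interfaces displays a definition.
# Condition names and thresholds are illustrative, not from the study.

FIXED_THRESHOLD = 15.0                          # generic user model
GROUP_THRESHOLD = {"younger": 12.0, "older": 20.0}  # group-based model

def definition_shown(condition, user_clicked=False,
                     idle_seconds=0.0, group="younger"):
    """Return True if the interface would display the official definition."""
    if condition == "no_clarification":          # interface 1
        return False
    if condition == "user_initiated":            # interface 2
        return user_clicked
    if condition == "generic_model":             # interface 3
        return user_clicked or idle_seconds > FIXED_THRESHOLD
    if condition == "group_model":               # interface 4
        return user_clicked or idle_seconds > GROUP_THRESHOLD[group]
    if condition == "always":                    # interface 5
        return True
    raise ValueError(f"unknown condition: {condition}")
```

Note that interfaces 3 and 4 are supersets of interface 2: the respondent can still click for clarification, and the system additionally volunteers it when the inactivity threshold is exceeded.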