Audio presentation of auto-suggest lists
One of the most significant advances behind World Wide Web (Web) 2.0 is the ability to update parts of a Web page independently. This can provide an exciting, interactive experience for sighted users, who are used to dealing with complex visual information. For visually impaired users, however, these pages may be confusing: updates are sometimes not recognised by screen readers, while in other cases they may interrupt the user inappropriately. The SASWAT project aims to develop a model of how sighted users interact with dynamic updates, and to use this model to identify the most effective ways of presenting updates through an audio information stream. Here, we describe a 'thin slice' through this project, focusing on one form of update: the auto-suggest list. Auto-suggest lists offer the user suggestions for completing a text input field, updating with each character typed. Experiments with sighted users suggest that the suggestions receive considerable attention and appear to offer reassurance that the input is reasonable. Suggestions further down the list are less likely to be viewed, and receive fewer and shorter fixations than those at the top. We therefore propose an implementation that presents the first three suggestions immediately and allows browsing of the rest.
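To make the proposed presentation concrete, the TypeScript below is a minimal sketch, not the SASWAT implementation: it assumes an input field, a WAI-ARIA live region, and a listbox element (the element IDs and the fetchSuggestions stub are placeholders for illustration). It announces the first three suggestions as soon as they arrive and lets the user browse the remainder with the arrow keys.

```typescript
// Minimal sketch (not the authors' implementation): announce the first three
// auto-suggest entries via an ARIA live region, with the rest reachable by
// arrow-key browsing. Element IDs and fetchSuggestions are assumed names.

const input = document.getElementById("search") as HTMLInputElement;
const liveRegion = document.getElementById("suggest-live") as HTMLElement; // aria-live="polite"
const listBox = document.getElementById("suggest-list") as HTMLUListElement; // role="listbox"

// Placeholder for whatever service supplies the suggestions.
async function fetchSuggestions(prefix: string): Promise<string[]> {
  return []; // e.g. replace with a fetch() call to a suggestion endpoint
}

let suggestions: string[] = [];
let focusIndex = -1; // index of the suggestion currently being browsed

input.addEventListener("input", async () => {
  suggestions = await fetchSuggestions(input.value);
  focusIndex = -1;

  // Render the full list so sighted users (and browsing) see every entry.
  listBox.innerHTML = "";
  for (const s of suggestions) {
    const li = document.createElement("li");
    li.setAttribute("role", "option");
    li.textContent = s;
    listBox.appendChild(li);
  }

  // Speak only the top three immediately, as proposed in the abstract.
  liveRegion.textContent = suggestions.slice(0, 3).join(", ");
});

// Arrow keys let the user browse beyond the first three on demand.
input.addEventListener("keydown", (e: KeyboardEvent) => {
  if (e.key !== "ArrowDown" && e.key !== "ArrowUp") return;
  if (suggestions.length === 0) return;
  e.preventDefault();
  focusIndex = e.key === "ArrowDown"
    ? Math.min(focusIndex + 1, suggestions.length - 1)
    : Math.max(focusIndex - 1, 0);
  liveRegion.textContent = suggestions[focusIndex];
});
```

Reading only the top of the list by default reflects the eye-tracking finding that lower suggestions receive fewer and shorter fixations, while the keyboard browsing path keeps the full list available without flooding the audio stream.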