Sonically exposing the desktop space to non-visual users: an experiment in overview information presentation

Most computer interfaces do not translate well onto non-visual displays (e.g. for blind users or wearable/mobile computing). Screen readers are the most prevalent aural technology for exposing graphical user interfaces to visually impaired users, but they eliminate many of the advantages of direct manipulation and WYSIWYG applications. Although sound has become more common in interfaces as audio hardware has improved, it is still used primarily for alerts and status reporting. Sound could instead enhance or replace a GUI by providing a 3D auditory environment. Users of such an environment, however, would need a reliable and effective method of navigation, and little is known about the usability of a system based on sound identification and localisation. In this work, we describe an experiment that will examine users’ ability to navigate a 3D auditory environment based on these concepts.
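
To make the idea of spatialising the desktop concrete, the sketch below is our own illustration, not a method from the paper; all names and the chosen angular ranges are hypothetical. It maps an item's position on a 2D desktop grid to an azimuth and elevation around the listener, then derives two standard localisation cues: an interaural time difference via Woodworth's approximation and a crude interaural level difference via constant-power panning.

```python
import math

HEAD_RADIUS_M = 0.0875   # average adult head radius
SPEED_OF_SOUND = 343.0   # m/s at room temperature

def desktop_to_polar(col, row, n_cols, n_rows):
    """Map a desktop grid cell to (azimuth, elevation) in radians.

    Columns span -90..+90 degrees left-to-right; rows span +40..-40
    degrees top-to-bottom (ranges chosen arbitrarily for this sketch).
    """
    azimuth = math.radians(-90 + 180 * col / max(n_cols - 1, 1))
    elevation = math.radians(40 - 80 * row / max(n_rows - 1, 1))
    return azimuth, elevation

def itd_seconds(azimuth):
    """Woodworth's approximation of the interaural time difference."""
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (azimuth + math.sin(azimuth))

def ild_gains(azimuth):
    """Constant-power stereo pan standing in for level differences."""
    pan = (math.sin(azimuth) + 1) / 2  # 0 = hard left, 1 = hard right
    return math.cos(pan * math.pi / 2), math.sin(pan * math.pi / 2)

if __name__ == "__main__":
    # Example: the icon in column 3, row 1 of a 4x3 desktop grid.
    az, el = desktop_to_polar(3, 1, 4, 3)
    print(f"azimuth={math.degrees(az):.0f} deg, "
          f"elevation={math.degrees(el):.0f} deg")
    print(f"ITD={itd_seconds(az) * 1e6:.0f} microseconds")
    left, right = ild_gains(az)
    print(f"gains L={left:.2f} R={right:.2f}")
```

A real system would feed these cues into HRTF-based rendering rather than simple panning, but even this reduced model shows how each desktop item can occupy a distinct, identifiable position in the auditory scene.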