An important challenge facing designers of mobile computing devices is this: how can interfaces for small screens be structured to give users efficient access to a plethora of (visually overwhelming) materials such as telecommunications, calendars, contact information and the Web? We propose a low-cost and immediately implementable solution to this problem in the form of a three-dimensional (3D) audio display. This solution expands the notion of "interface" by extending the user's perception-action space beyond the dimensions of the display to encompass a virtual 3D auditory space surrounding the user. In this paper we present a high-level outline of our 3D audio windowing system and its associated suite of utilities for exploiting 3D space in information display.

INTRODUCTION

With the rapid growth of networked and integrated computing devices, a user's interaction with his/her computing device is becoming increasingly multi-tasking: while immersed in a foreground task, the user is typically engaged in monitoring multiple simultaneously running background tasks with widely varying response times. For the small-screen mobile computer user, multi-tasking (and the utilisation of remotely networked resources) offers the possibility of a much richer computing experience; however, existing interface architectures do not support efficient interactions of this sort. At present, information-access rates are strongly limited by screen size and, as most manufacturers of modern computing devices aim to minimise device size, this (visual) information bottleneck looks likely to tighten. The prime motivation underlying our work is to overcome this information-access rate barrier through the development of alternative-modality interfacing tools. This paper takes a high-level look at a new project which combines rapidly developing 3D audio tools with tried and tested graphical user interface (GUI) techniques for exploiting space in information representation.

Current research

Our work so far has focussed on how the limitations of display size can be minimised by the addition of structured non-speech sounds [1, 2]. One problem with mobile devices is that they have a limited amount of screen space: the screen cannot be large because the device must fit into the hand or pocket to be easily carried. As the screen is small, it can easily become cluttered with information as designers try to cram on as much as possible. In many cases, desktop widgets (buttons, menus, windows, etc.) have been taken straight from standard graphical interfaces (where screen space is not a problem) and applied directly to mobile devices. This has resulted in devices that are hard to use, with small text that is hard to read, cramped graphics and little contextual information.

One way to solve the problem is to substitute non-speech audio cues for visual ones. Sound can be used to present information about widgets so that their size can be reduced. This means that clutter on the display can be diminished and/or more information can be presented. Results from two studies on the 3Com PalmIII [1, 2] showed that the usability of both large and small on-screen buttons can be improved by the addition of simple sounds: participants entered more data and experienced reduced workload when using the sonically-enhanced buttons. The limitation of this approach is that the auditory display space on mobile devices is small.
Devices usually have just a single loudspeaker, so sounds can only be presented as coming from a single point in space. The auditory display space can therefore itself become cluttered if too many sounds are presented at the same time. One way to solve this would be to incorporate more loudspeakers or to allow the user to wear headphones. Another loudspeaker would add cost and weight, and could not be positioned far enough from the first to give good stereo separation. Headphones, by contrast, can provide stereo and full three-dimensional (3D) sound. Their disadvantages are that users are tied to the device by a cable and that their ears are covered. However, users commonly wear headphones with personal stereos, and 'hands-free' kits are also common with mobile phones.
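To make the headphone-based approach concrete, the sketch below illustrates one simple way audio "windows" could be spread around the listener: each background task is assigned an azimuth, and a constant-power pan law converts that azimuth into left/right headphone gains. This is a minimal illustration under our own assumptions, not the system described in this paper: a full 3D audio display would use HRTF-based spatialisation rather than stereo panning, and the function names and example tasks here are hypothetical.

```python
import math

def azimuth_for_slot(slot: int, n_slots: int) -> float:
    """Spread n_slots sources evenly across the frontal arc (-90 to +90 degrees)."""
    if n_slots == 1:
        return 0.0
    return -90.0 + 180.0 * slot / (n_slots - 1)

def pan_gains(azimuth_deg: float) -> tuple[float, float]:
    """Constant-power panning: map an azimuth in [-90, +90] degrees
    to (left, right) headphone gains. Total power L^2 + R^2 stays at 1."""
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2.0)  # 0 .. pi/2
    return math.cos(theta), math.sin(theta)

# Hypothetical background tasks a mobile user might monitor while
# a foreground task occupies the small screen.
tasks = ["email monitor", "calendar alerts", "web download"]

for i, task in enumerate(tasks):
    az = azimuth_for_slot(i, len(tasks))
    left, right = pan_gains(az)
    print(f"{task:15s} azimuth {az:+6.1f} deg  L={left:.2f} R={right:.2f}")
```

Even this crude stereo separation gives each task its own location in the auditory display, so several simultaneous sounds remain distinguishable rather than piling up at a single point, which is the clutter problem a single loudspeaker cannot avoid.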
REFERENCES

[1] Michael Cohen et al. Integrating Graphic and Audio Windows. Presence: Teleoperators & Virtual Environments, 1992.
[2] Chris Schmandt et al. AudioStreamer: exploiting simultaneity for listening. CHI 95 Conference Companion, 1995.
[3] Michael Cohen et al. Throwing, Pitching and Catching Sound: Audio Windowing Models and Modes. Int. J. Man Mach. Stud., 1993.
[4] A. Schulman et al. Recognition memory and the recall of spatial location. Memory & Cognition, 1973.
[5] J. Mandler et al. On the coding of spatial information. Memory & Cognition, 1977.
[6] Stephen A. Brewster et al. Maximising screen-space on mobile computing devices. CHI Extended Abstracts, 1999.
[7] John F. Whitehead. The Audio Browser - An Audio Database Navigation Tool in a Virtual Environment. ICMC, 1994.
[8] Stephen A. Brewster. Sound in the interface to a mobile computer. HCI, 1999.
[9] Chris Schmandt et al. Dynamic Soundscape: mapping time to space for audio browsing. CHI, 1997.
[10] Michael Cohen et al. Multidimensional Audio Window Management. Int. J. Man Mach. Stud., 1991.
[11] Stephen Brewster et al. Trading Space for Time in Interface Design. 1999.
[12] Michael Cohen et al. Extending the notion of a window system to audio. Computer, 1990.