The Chatty Environment – Providing Everyday Independence to the Visually Impaired

Visually impaired persons encounter serious difficulties in leading an independent life, difficulties inherent to the nature of their impairment. In this paper, we suggest a way of deploying ubiquitous computing technologies to cope with some of these difficulties. We introduce the paradigm of a chatty environment that reveals itself by talking to the visually impaired user, thus offering her some of the information she would otherwise miss. We also describe an initial prototype of the chatty environment. Finally, we briefly analyze the potential benefits of the system and argue that the visually impaired are ideal early technology adopters in the pervasive healthcare field.

1 Independent Living for the Visually Impaired

Visual impairment has one important characteristic: the blind person is in need of guidance and assistance. A deaf person can orient herself perfectly in new environments. Many wheelchair users are also able to lead a normal, independent life, in part thanks to the wheelchair access requirements in many countries. Not so the visually impaired. The typical blind person usually has a good sense of orientation only in her immediate neighborhood: at home, in her street, or on the short walk to work. For the rest of the world, however, she is highly dependent on external help. Think, for example, of shopping in a supermarket: thousands of items that all feel the same, spread over hundreds of shelves. Visually impaired people will therefore shop only at their local supermarket and buy only a few products in well-known locations. Or think of a modern airport terminal or railway station. The blind person will not be able to find her way by herself; an architecture with several floors connected in the most intricate ways is simply too complex to comprehend without any visual overview. Why do the visually impaired have more difficulties leading an independent life than people with other physical impairments?
The explanation is inherent to the way we humans use our senses: most of the information about our surroundings is gathered visually. Our eyes have about 120 million receptors, while the ears do their job with about 3000 receptors. The brain region responsible for processing visual input is five times larger than the region handling audio input. These are strong indications that the amount of visual information significantly exceeds that delivered by the other senses.[1]

To allow the visually impaired a higher degree of independence, we propose the paradigm of the chatty environment. This environment tries to make some of the visual information available to the visually impaired as well. For that, it uses other media: a first prototype is based on audio; for later system versions, other channels, such as tactile feedback, are possible.

2 The Chatty Environment

The main goal of the system is to use alternative means to present to the visually impaired user the input that sighted people get through the visual channel. In a first, naive approach, one may think of the user wandering around in the real world while the world keeps talking to her, continuously revealing its existence and characteristics: "Here is the shelf with milk products, behind you are the fridges with meat and ice", "Here is track 9, do you want more information on the departing trains?" This feature of the system will probably seem annoying to most sighted people: an environment talking endlessly to the user sounds like a headache most of us would surely turn off after a few minutes. However, in conversations with members of the Swiss Association of the Blind, it turned out that for visually impaired people there can almost never be too much audio input. This is comparable to the huge amount of visual information sighted people pick up every second, little of which they really use.
Here, too, it is far from annoying to continuously receive so much unnecessary information, since one has learned to focus on the interesting aspects only.

2.1 The System – Overview

We are currently building a prototype of the chatty environment on part of the ETH Zurich campus. The main components of the prototype are:

– Tagged Entities in the environment. A large number of the chatty environment's entities are tagged with electronic beacons, small active or passive electronic devices. A virtual aura thus arises around the tagged real-world entities (see figure 1). Like beacons on a coastline, they draw attention to special facilities. Depending on how densely the environment is networked, there can be many beacons within the user's range. Beacons come and go as the user moves, so she has to be continuously informed about beacons entering and leaving her range.

– World Explorer. The world explorer is a device carried by the user; it is the interface between the user and the tagged entities in the environment. When the user moves into the aura of an object, the explorer senses the object and mediates the information exchange between user and object. It does this through a standard interface, described in section 2.2.

Fig. 1. The virtual aura of tagged real-world objects.

– Virtual Counterparts of real-world objects. Beacons are typically small devices with limited resources. Therefore, the objects usually have digital representations, so-called virtual counterparts, which typically reside on an Internet server. The counterpart's URL is the first information the world explorer gets from the object's beacon. Hence, even if contact with the beacon is lost quickly, the explorer can obtain all the information from the virtual counterpart.

– Communication Infrastructure. To access virtual counterparts and their data, the world explorer uses the background communication infrastructure. For this purpose, the explorer is equipped with Bluetooth and IEEE 802.11 WLAN communication facilities.

[1] One example of this lack of independence is car driving. While wheelchair users or deaf people are usually well able and, in most countries, allowed to drive a car, this is unthinkable for blind persons.

More details on the system components as well as design decisions and their motivation are presented in section 2.3. First, however, we present another central system component: the audio user interface.
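The components above imply a simple sensing loop in the world explorer: compare successive beacon scans, announce beacons that enter or leave range, and cache each counterpart URL so it survives loss of beacon contact. The following Python sketch is our illustration of that loop, not the authors' implementation; all names (`diff_ranges`, `WorldExplorer`, `on_scan`) and URLs are hypothetical, and beacon scanning is simulated rather than performed over Bluetooth or WLAN.

```python
# Illustrative sketch of the world explorer's discovery loop (hypothetical
# names; the paper does not specify an API). Beacon scanning is simulated.

def diff_ranges(previous, current):
    """Compare two scans; return the sets of beacons that entered and left."""
    return current - previous, previous - current

class WorldExplorer:
    def __init__(self):
        self.in_range = set()    # beacon IDs currently sensed
        self.counterparts = {}   # beacon ID -> counterpart URL (cached)

    def on_scan(self, scan):
        """scan: dict mapping beacon ID -> counterpart URL read from the beacon."""
        entered, left = diff_ranges(self.in_range, set(scan))
        for beacon in entered:
            # The URL is the first datum a beacon transmits; caching it keeps
            # the virtual counterpart reachable after beacon contact is lost.
            self.counterparts[beacon] = scan[beacon]
        self.in_range = set(scan)
        return entered, left     # e.g. to be announced via text-to-speech

explorer = WorldExplorer()
explorer.on_scan({"shelf-07": "http://example.org/shelf-07"})
entered, left = explorer.on_scan({"track-09": "http://example.org/track-09"})
print(sorted(entered), sorted(left))      # → ['track-09'] ['shelf-07']
print(explorer.counterparts["shelf-07"])  # URL survives loss of beacon contact
```

In a real deployment the scan would come from a Bluetooth or WLAN discovery round, and the entered/left sets would drive the audio interface described next.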