IMPROVING THE REALITY PERCEPTION OF VISUALLY IMPAIRED THROUGH PERVASIVE COMPUTING
The visually impaired experience serious difficulties in leading an independent life, due to their reduced perception of the environment. We believe, however, that ubiquitous computing can significantly improve the perception of the surrounding reality for the blind and visually impaired. In this paper we describe the Chatty Environment, a system that addresses this problem and was developed after a series of interviews with potential users. The system, which reveals the surroundings to the user through speech output, is usable in both indoor and outdoor contexts.

1. Everyday Problems of the Visually Impaired

Most blind and visually impaired people face serious difficulties when they find themselves in new, unknown environments. A cane or a guide dog is not enough to let them find their way through an unknown city, nor even through a less complex environment such as an airport terminal or a university building. Many other problems encountered by the visually impaired are not obvious to sighted people. In a supermarket, for example, a blind person has great trouble finding the needed items, since all packaged food feels similar. Without external help, he or she will only go to the familiar local supermarket and only buy a few items in learned locations. Another problem most sighted people are unaware of is that a visually impaired person will often fail to catch a bus, because its stop at the station is too brief to let him or her find the bus door and the button that must be pushed to open it. Here again, blind people have to rely on external help.

Why do the visually impaired face such difficulties in leading an independent life? The cause probably lies in how humans use their senses to perceive the world: most people, when asked, will identify sight as the most important sense. This subjective impression is supported by anatomical facts.
The brain region processing visual input, with about 10 billion neurons, is more than five times larger than the brain regions handling any other sensory input. Since sight is the most important human sense, the modern world is tailored to it, which worsens the problem for the visually impaired. When buses are built with buttons for opening the doors, it is likely that nobody thinks of blind people and the trouble they will have finding those buttons.

Certainly, with the ever-increasing miniaturization of electronic devices, the rapidly growing understanding of human genetics and brain function, and the possible emergence of hybrid neuronal-electronic circuits, blindness could be eradicated in a foreseeable but distant future. Miniature cameras installed in the eyeballs would then transmit their images directly to the brain. Until medicine ultimately reaches this goal, however, we believe that pervasive and ubiquitous computing technology can help the visually impaired gain an increased quality of life and a higher degree of independence.

(∗ Swiss Federal Institute of Technology (ETH) Zurich, 8092 Zurich, Switzerland, coroama@inf.ethz.ch)

2. The Chatty Environment

Bearing in mind the difficulties encountered by blind people, we proposed the paradigm of a chatty environment [7, 8]: a system that supplements visual information with a form of sensory input the visually impaired can experience, namely spoken information. While the user moves through the chatty environment, this spoken information is continuously presented to him. He thus finds out how his surroundings are shaped and which entities exist around him, e.g., where the incoming bus goes and where its nearest door is located, which package of food he is holding in his hand in the supermarket, or where the next fast-food restaurant is located.
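The announcement behavior described above can be pictured as a simple proximity check: an object is announced once, the first time the user comes within its beacon's range. A minimal sketch, where all names and the radius-based range model are illustrative assumptions rather than the system's actual implementation:

```python
# Hypothetical sketch: when a tagged object's beacon range is entered,
# the user's device announces the object once via speech output.
# Names, the distance model, and the data are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional, Set


@dataclass
class TaggedObject:
    name: str             # spoken label for the object
    aura_radius_m: float  # beacon range in metres


def announce(obj: TaggedObject, distance_m: float,
             already_announced: Set[str]) -> Optional[str]:
    """Return an announcement the first time the user enters the object's range."""
    if distance_m <= obj.aura_radius_m and obj.name not in already_announced:
        already_announced.add(obj.name)
        return f"Nearby: {obj.name}"  # would be handed to a text-to-speech engine
    return None


bus = TaggedObject("Bus 31 to Airport, front door on your left", aura_radius_m=10.0)
heard: Set[str] = set()
print(announce(bus, 25.0, heard))  # None -- still out of range
print(announce(bus, 8.0, heard))   # announced once upon entering the range
```

Announcing each object only once avoids the device repeating itself on every position update; a real system would additionally forget objects once the user has moved away again.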
The visually impaired user is also able to get more in-depth information on selected parts of the environment and may even perform actions on some of these entities.

Figure 1. The virtual aura of tagged real-world objects.

The chatty environment is realized by using pervasive computing technologies to enhance the environment's real-world objects with a virtual component that holds information about the corresponding object. In the real world, each object possesses a beacon, which creates a virtual aura around the object (see Figure 1). When the user moves into the aura of such an enhanced real-world entity (or when the entity moves towards the person, as in the case of a bus), the device carried by the user – the world explorer – tells her about the object's existence and offers her a standardized interface for interacting with it.

This feature – the environment endlessly speaking to the user, telling her about the surroundings – might seem annoying to most sighted people. However, during our interviews with blind and visually impaired people, we learned that there can almost never be too much spoken input for the visually impaired.

To better understand the specific needs of visually impaired people, we conducted a series of interviews with blind and visually impaired persons. The next section summarizes their most relevant results.

3. Visually Impaired User Study

Blind and visually impaired people perceive the world differently from sighted people. Relevant differences also exist between the perception of the completely blind and that of people with some remaining sight. Any system designed for the visually impaired has to take these differences into account in order to provide a user interface adapted to the limitations and special needs of its users. To this end, a series of nine interviews (5 women, 4 men) with blind and visually impaired persons provided valuable information for the system design.
The mean age of the interviewees was 54 years, with a range from 30 to 81 years. They live in different regions of Switzerland and their educational level varies from high school to university. The impairments of the interviewees range from total blindness to 40% of sight. The interviews were conducted in two steps: all interviewees first answered a questionnaire of 20 questions, ranging from general information about their age, profession, or degree of impairment to precise questions about the use of handheld devices, preferred in- and output methods, and particular requirements for object descriptions. The interview also followed an "open-end" principle, each participant being able to add any information or suggestion they considered relevant. The interviews lasted about one hour.

From these interviews, we derived a list of requirements for a pervasive computing assistance system aimed at the blind and visually impaired. According to the survey, a valuable system would be one that:

1. increases the user's perception of the surroundings by telling her which entities she is passing by. This seems to be the most important user requirement: an extension of their own world perception through announcements of the environmental entities in their immediate neighborhood,

2. also helps in an environment with many small items (e.g., products in the supermarket), by answering questions like "which item am I holding in my hand?" or "where is the huckleberry jelly?",

3. does not require the user to point at a certain location to get the desired information (pointing being especially difficult for completely blind people),

4. announces points of interest located further away,

5. helps them navigate to these points of interest, outdoors as well as indoors (especially relevant for complex buildings like airport terminals or large office buildings),

6. lets them filter objects (like fast-food restaurants or restrooms) according to a classification and then presents a list of such entities in the neighborhood, so that the user may subsequently choose to be guided to either the nearest instance or another one from the list,

7. enables communities to emerge, by allowing the user to leave marks or reminders for herself and/or other users (e.g., a message on a traffic light: "large crossroad ahead, must be crossed quickly").

The interviews yielded further valuable data. We learned that most interviewees would not be disturbed by objects that "speak"; on the contrary, they imagine that such objects would help them find their way more easily, even without an explicit guidance aid. Speech is the preferred output medium for almost all interviewees. Some of them could see a use for additional signalling techniques, such as vibration or non-speech audio signals (i.e., beeps). All rejected the idea of force feedback on the cane, as this technique would alter the normal use of the cane too much. All participants would like to carry the device in a pocket (a handbag for the women) or hung around the neck, in order to keep their hands free for other tasks; hence, the acoustic output needs to be transmitted to the user via a headset. Most important about speech output (and audio output in general), however, is that the system must not impair the user's normal hearing. Blind people need stereometric hearing, for example to determine the direction of moving obstacles. Any headphones or earphones used therefore have to comply with this requirement. This excludes stereo headphones; mono headphones are suitable if they let environmental sounds pass through.
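Requirement 6 above amounts to a simple filter-and-sort over the entities currently in range: keep only those of the requested category and order them nearest-first. A minimal sketch, with hypothetical entities and category names:

```python
# Minimal sketch of requirement 6: filter the surrounding entities by
# category and list them nearest-first, so the user can pick one to be
# guided to. Entities, categories, and distances are illustrative assumptions.

# (label, category, distance in metres) -- as reported by nearby beacons
points_of_interest = [
    ("Burger restaurant, main hall", "fast-food", 120.0),
    ("Restroom, second floor", "restroom", 45.0),
    ("Noodle stand, platform 2", "fast-food", 60.0),
]


def nearby(category: str, pois: list) -> list:
    """Entities of the requested category, sorted nearest-first."""
    return sorted((p for p in pois if p[1] == category), key=lambda p: p[2])


# The resulting list would be read out via speech; the user then selects
# an entry and is guided towards it.
for label, _, dist in nearby("fast-food", points_of_interest):
    print(f"{label}, about {dist:.0f} metres away")
```

Presenting the nearest instance first matches the interviewees' stated preference of being guided to the closest match by default, while still letting them choose another entry from the spoken list.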
[1] Vlad Coroamă et al. The Chatty Environment – Providing Everyday Independence to the Visually Impaired. 2003.
[2] Vlad Coroamă. The Chatty Environment – A World Explorer for the Visually Impaired. 2003.
[3] Jürgen Bohn et al. Robust Probabilistic Positioning Based on High-Level Sensor Fusion and Map Knowledge. 2003.