HAPTIC EXPLORATION OF VIRTUAL BUILDINGS USING NON-REALISTIC HAPTIC RENDERING

Abstract

This paper argues that the goal of minimizing the differences between the features of a 3D model and its presentation in the 3D haptic space of a force feedback device is not necessarily desirable. We present the idea of a non-realistic haptic rendering system. We examine in detail the mapping of 3D data structures onto 3D haptic space and show the use of information layers which are based on a logical decomposition of the information space to be presented, rather than a purely geometric decomposition of an underlying geometric model. As an example of the different techniques we use the virtual reconstruction of the palace of Otto the Great.

1 Introduction

Current applications which use force feedback devices aim at improving haptic simulation and the rendering algorithms in connection with objects or geometric models. Depending on the task, there are different directions of research, such as the simulation of surface properties (for example haptic textures, stiffness, and static and dynamic friction [3]), static objects (for example visco-elastic and plastic objects [10]), dynamic objects (for example buttons, sliders, etc. [1]), interaction techniques (point- and ray-based rendering [5]), and the manipulation of objects (see [7]). The common goal of all these techniques is to improve the perceivable realism of the simulated haptic world.

However, Colwell, Petrie et al. [2] show in their studies that blind and visually impaired people have various serious problems making use of force feedback devices for exploring such realistic virtual worlds. The first result is that users temporarily get lost in the virtual haptic space. Especially after exploring an individual object of a scene, users have difficulty keeping the probe in contact with the object. This makes the recognition of the shape of objects more difficult. The second important point they observed is that users often have difficulties recognizing complex objects. This holds true even when these objects are constructed from simple components, such as cuboids, which are easy to recognize in isolation. Finally, blind users are not very good at assessing the sizes of objects they have touched in a 3D haptic environment. All of these aspects were observed under the condition that the virtual worlds were created using realistic representations. These results show that the abilities of users to recognize objects are limited when using force feedback devices.

In this paper, we present methods for non-realistic haptic rendering (NRHR) which reduce the complexity of the geometric data to be displayed and which use additional representation and interaction techniques in compensation. In this context, we examine the mapping of a 3D model onto the usable haptic space in detail and divide it into two parts. On the one hand, we