Gaze-directed adaptive rendering for interacting with virtual space

This paper presents a new rendering method for interaction with 3D virtual space using gaze-detection devices. In this method, hierarchical geometric models of graphic objects are constructed prior to the rendering process. The rendering process first calculates the visual acuity, which represents the importance of a graphic object to a human operator, from the operator's gaze position. Second, the process selects a level from the set of hierarchical geometric models depending on the value of the visual acuity: a simpler level of detail is selected where the visual acuity is lower, and a more detailed level is used where it is higher. The selected graphic models are then rendered on the display. This paper examines three visual characteristics for calculating the visual acuity: central/peripheral vision, kinetic vision, and fusional vision. The actual implementation and our testbed system are described, as well as the details of the visual acuity model.
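The acuity-driven level-of-detail selection described above can be sketched as follows. This is a minimal illustration, not the paper's actual model: the 1/(1 + k·e) acuity falloff with angular eccentricity and the linear mapping from acuity to a level index are assumed placeholder formulas, and the function names are hypothetical.

```python
import math

def visual_acuity(gaze, obj, k=0.1):
    """Estimate acuity from angular eccentricity between the gaze point
    and an object's position (both given as view angles in degrees).
    The 1/(1 + k*e) falloff is an assumed simple model."""
    e = math.hypot(obj[0] - gaze[0], obj[1] - gaze[1])  # eccentricity (deg)
    return 1.0 / (1.0 + k * e)

def select_lod(acuity, num_levels):
    """Map acuity in (0, 1] to a hierarchy level: 0 = most detailed,
    num_levels - 1 = simplest."""
    level = int((1.0 - acuity) * num_levels)
    return min(level, num_levels - 1)

# An object at the gaze point gets full detail; a peripheral one gets less.
lod_foveal = select_lod(visual_acuity((0, 0), (0, 0)), 4)      # level 0
lod_peripheral = select_lod(visual_acuity((0, 0), (40, 30)), 4)  # coarser level
```

In a full system this selection would run per object per frame, with the kinetic and fusional components modulating the acuity value before the level is chosen.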