Algorithm for dynamic disparity adjustment
This paper presents an algorithm for enhancing stereo depth cues in moving computer-generated 3D images. The algorithm incorporates the results of an experiment in which observers set their preferred eye separation for a set of moving scenes. The data from this experiment were used to design an algorithm that dynamically adjusts eye separation (or disparity) according to the characteristics of the scene. The algorithm has the following steps: (1) Determine the near and far points of the computer graphics scene to be displayed. This is done by sampling the Z buffer. (2) Scale the scene about a point corresponding to the midpoint between the observer's two eyes. The scaling factor is chosen so that the nearest part of the scene lies just behind the monitor. (3) Adjust an eye separation parameter to create stereo depth according to the empirical function derived from the initial study. This has the effect of roughly doubling the stereo depth in flat scenes while limiting the stereo depth in deep scenes. Steps 2 and 3 both reduce the discrepancy between focus and vergence for most scenes. The algorithm is applied dynamically in real time, with a damping factor so that disparities never change too abruptly. A minimal per-frame sketch of this loop is given below.
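The following Python sketch illustrates how such a per-frame update might be structured. The sampling stride, the margin behind the screen, the default eye separation, the damping coefficient, and the particular mapping from depth range to eye separation are all placeholders chosen for illustration; they are not the empirical function or parameter values reported in the paper.

```python
import numpy as np

def update_disparity(z_buffer, screen_depth, prev_separation,
                     base_separation=0.065, damping=0.1):
    """One frame of a dynamic disparity adjustment loop (illustrative sketch).

    z_buffer        : 2-D array of view-space depths for the current frame
    screen_depth    : view-space depth of the monitor surface
    prev_separation : eye separation used on the previous frame
    """
    # Step 1: find the near and far points by sparsely sampling the Z buffer.
    samples = z_buffer[::8, ::8]
    z_near, z_far = float(samples.min()), float(samples.max())

    # Step 2: compute a scale factor (applied about the midpoint between the
    # observer's eyes in the rendering transform) that places the nearest
    # part of the scene just behind the monitor surface.
    margin = 0.01
    scale = (screen_depth + margin) / max(z_near, 1e-6)

    # Step 3: map the (scaled) depth range to an eye separation.  This
    # placeholder curve roughly doubles the separation for flat scenes and
    # shrinks it as the scene gets deeper; it stands in for the paper's
    # empirically derived function.
    depth_range = (z_far - z_near) * scale
    target_separation = base_separation * 2.0 / (1.0 + depth_range / screen_depth)

    # Damping: move only part of the way toward the target each frame so
    # that disparities never change abruptly.
    new_separation = prev_separation + damping * (target_separation - prev_separation)
    return scale, new_separation
```

A renderer would call a function like this once per frame, apply the returned scale about the cyclopean point in its viewing transform, and use the damped eye separation to position the stereo camera pair for that frame.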