Alignment Errors between Real and Virtual World in Augmented Reality

Over the last decade, augmented reality (AR) technology has attracted significant interest in the medical field. However, designing a medical AR system remains challenging, because the expectations of accuracy and stability in such a setting are high. Misalignment of the computer-generated objects greatly compromises the utility of such systems. Especially in medical training, where the viewpoint of the trainee can vary widely, such inaccuracies can degrade manipulative skill training and the sense of presence, and thus reduce the training effect. Our project is to develop a medical training simulator for orthopaedic surgery using AR techniques. The present study deals with the different error sources encountered in developing such a system.

Our setup consists of a camera-mounted marker used to estimate the user's head pose and a tracking device reporting the marker's location. During the implementation of such a system, two major types of errors may be encountered: pose errors and latencies. Pose errors lead to misalignment between the virtual and real world due to incorrect estimation of the camera pose in the world coordinate system. Latencies are caused by delays in the tracking device, in sending images from the camera to the head-mounted display, and in rendering the virtual objects. Besides these errors, lighting conditions and occlusions also influence the quality of the illusion that the virtual and real worlds are merged.

Even with a commercial tracking system that reports marker positions within very low error limits, the subpixel backprojection accuracy required by our application cannot be achieved. One of the issues is the estimation of the transformation between the marker and the camera frame. We have set up a calibration method based on infrared LEDs in order to reduce the number of error sources. We also developed a simulation framework to predict and analyse the achievable backprojection accuracy, simulating the tracking error caused by the noisy data reported by the tracking device. We have shown that modelling the tracking errors as a biased pose depending on the marker location yields results close to the experimental data (a minimal numerical sketch of such a backprojection-error simulation is given below). These errors lead to jitter in the output data. We attempted to filter the tracking data in order to diminish the jitter, but the virtual object then lags behind its true position during user movement (this jitter-versus-lag trade-off is illustrated in the second sketch below).

To correct these errors, we aim to dynamically measure the registration error in the images. To this end, we are currently working on an image-based correction using landmarks placed in the real scene. Since these landmarks have been calibrated relative to the tracking device frame, we have a first approximation of their 2D positions in the images; a vision-based tracker is then used to refine these 2D locations. Using this information, we attempt to correct the misregistration between the marker and the camera frame (the third sketch below illustrates one simplified form such a correction could take). In the future we will evaluate the effectiveness of this dynamic correction and analyse the results. We will then implement a stereoscopic system and embed a light probe in order to further reduce the perceivable difference between real and virtual objects in the scene.
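As an illustration of the backprojection errors discussed above, the following is a minimal Python sketch assuming a simple pinhole camera model; the intrinsics, the 0.1-degree rotation bias, and the 0.5 mm translation bias are illustrative values, not measurements from our system. It shows how a tracking error modelled as a small biased pose offset already produces a pixel-level misalignment at a typical working distance.

    import numpy as np

    # Pinhole intrinsics (illustrative values, not those of the actual system).
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    def project(K, R, t, X):
        """Project a 3D world point X to pixel coordinates for camera pose (R, t)."""
        x_cam = R @ X + t            # world frame -> camera frame
        x_img = K @ x_cam            # camera frame -> image plane
        return x_img[:2] / x_img[2]  # perspective division

    def rot_x(deg):
        """Rotation matrix about the x-axis by `deg` degrees."""
        a = np.radians(deg)
        return np.array([[1.0, 0.0,         0.0      ],
                         [0.0, np.cos(a), -np.sin(a)],
                         [0.0, np.sin(a),  np.cos(a)]])

    # True camera pose: identity rotation, 0.5 m in front of the scene.
    R_true, t_true = np.eye(3), np.array([0.0, 0.0, 0.5])

    # Tracking error modelled as a biased pose: 0.1 deg of rotation and
    # 0.5 mm of translation added to the true pose.
    R_biased = rot_x(0.1) @ R_true
    t_biased = t_true + np.array([0.0005, 0.0, 0.0])

    X = np.array([0.02, 0.01, 0.0])  # a world point roughly 0.5 m from the camera

    err = np.linalg.norm(project(K, R_true, t_true, X)
                         - project(K, R_biased, t_biased, X))
    print(f"backprojection error: {err:.2f} px")  # ~0.8 px, i.e. not subpixel

With an 800 px focal length, the 0.5 mm lateral bias alone maps to about 0.8 px at 0.5 m, which already exceeds a subpixel accuracy budget.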
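The jitter-versus-lag trade-off observed when filtering the tracking data can be reproduced with a first-order low-pass filter; the abstract does not state which filter was used, so the exponential filter, the 60 Hz rate, and the noise level below are assumptions for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated 1D marker trajectory: constant head motion plus tracker noise.
    dt, speed, sigma = 1.0 / 60.0, 0.1, 0.0005    # 60 Hz, 10 cm/s, 0.5 mm noise
    t = np.arange(0.0, 2.0, dt)
    truth = speed * t
    measured = truth + rng.normal(0.0, sigma, size=t.size)

    # First-order low-pass filter: smaller alpha means less jitter but more lag.
    alpha = 0.1
    filtered = np.empty_like(measured)
    filtered[0] = measured[0]
    for i in range(1, measured.size):
        filtered[i] = alpha * measured[i] + (1.0 - alpha) * filtered[i - 1]

    # Evaluate after the filter has settled (second half of the trajectory).
    raw_err, flt_err = measured - truth, filtered - truth
    half = t.size // 2
    print(f"raw jitter (std):      {np.std(raw_err[half:]) * 1e3:.2f} mm")
    print(f"filtered jitter (std): {np.std(flt_err[half:]) * 1e3:.2f} mm")
    print(f"steady-state lag:      {-np.mean(flt_err[half:]) * 1e3:.2f} mm")

The filtered output is several times smoother, but for a constant-velocity motion the filter settles to a lag of speed * dt * (1 - alpha) / alpha (about 15 mm here): exactly the behaviour described above, where the virtual object trails its true position during user movement.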
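Finally, one simplified form of the image-based correction described above is sketched here: the landmarks' 2D positions predicted from the tracked pose are compared with the refined positions from a vision-based tracker, and a corrective 2D transform is fitted by least squares. Our actual goal is to correct the marker-to-camera transform itself, so this affine image-space correction and all of its numbers are a hypothetical stand-in.

    import numpy as np

    def fit_affine_2d(src, dst):
        """Least-squares 2D affine transform mapping src points onto dst.

        src, dst: (N, 2) arrays of corresponding image points, N >= 3.
        Returns a 2x3 matrix A such that dst ~= A @ [x, y, 1].
        """
        n = src.shape[0]
        M = np.zeros((2 * n, 6))
        M[0::2, 0:2] = src
        M[0::2, 2] = 1.0
        M[1::2, 3:5] = src
        M[1::2, 5] = 1.0
        params, *_ = np.linalg.lstsq(M, dst.ravel(), rcond=None)
        return params.reshape(2, 3)

    rng = np.random.default_rng(1)

    # Landmark positions predicted by backprojecting the calibrated landmarks
    # with the (erroneous) tracked pose, and the positions refined by a
    # vision-based tracker (all values illustrative).
    predicted = np.array([[100.0, 120.0], [420.0, 110.0],
                          [400.0, 380.0], [130.0, 360.0]])
    observed = predicted * 1.002 + np.array([3.0, -2.0])   # small systematic bias
    observed += rng.normal(0.0, 0.3, size=observed.shape)  # plus detector noise

    A = fit_affine_2d(predicted, observed)

    def correct(pt):
        """Re-map a rendered 2D point with the fitted correction."""
        return A @ np.array([pt[0], pt[1], 1.0])

    residuals = [np.linalg.norm(correct(p) - o)
                 for p, o in zip(predicted, observed)]
    print(f"mean residual after correction: {np.mean(residuals):.2f} px")

With a few well-spread landmarks the residual drops to the level of the detector noise; in practice the measured misregistration would instead be fed back into the estimate of the marker-to-camera transform.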