Cast shadows in augmented reality systems

Augmented Reality (AR) systems insert graphics objects into images of real scenes. Geometric and photometric consistency must be achieved for AR systems to be effective and for the augmented graphics to appear photorealistic. In particular, global illumination effects between the graphics objects and the scene objects need to be simulated. This thesis investigates ways to improve AR rendering by creating cast shadows between real and graphics objects, which requires knowledge of both the scene lighting and the scene structure. We first present novel methods for recovering the light sources from the input images. For indoor scenes, we exploit scene regularities such as parallel and orthogonal walls. For outdoor scenes, and for indoor scenes where the lighting can be approximated by a directional source, we present a method for estimating the light direction from cast shadows already present in the real scene. Besides the light sources, we also need the 3D structure of the scene so that we can render the shadow cast on a real object by a graphics object. Using spheres as primitives, we develop an algorithm that approximates the shape of scene objects from multiple silhouettes. With these components, one can build an AR system that infers the necessary scene information from shadows and inserts graphics objects with convincing cast shadows. Finally, to justify this endeavor by establishing that shadows are important to human spatial perception, we investigate shadow perception in the context of cue integration.
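
As an illustration of the directional-light case mentioned above, the sketch below (not taken from the thesis; a minimal NumPy example with hypothetical function and variable names) estimates a directional light from assumed known correspondences between points on a real object and the shadow points they cast on a known ground plane. For a directional source, every shadow point lies along the same direction from its casting point, so averaging the normalized difference vectors gives an estimate of the light direction.

import numpy as np

def estimate_directional_light(object_points, shadow_points):
    """Estimate a directional light from object/shadow point pairs.

    object_points : (N, 3) array of 3D points on the occluding object.
    shadow_points : (N, 3) array of the corresponding shadow points,
                    e.g. on the ground plane z = 0.
    Returns a unit vector pointing from the scene toward the light.
    """
    object_points = np.asarray(object_points, dtype=float)
    shadow_points = np.asarray(shadow_points, dtype=float)

    # For a directional source, each ray shadow -> object is parallel
    # to the light direction; average the unit vectors to reduce noise.
    rays = object_points - shadow_points
    rays /= np.linalg.norm(rays, axis=1, keepdims=True)
    light_dir = rays.mean(axis=0)
    return light_dir / np.linalg.norm(light_dir)

# Toy example: a point 1 m above the ground casting a shadow offset by
# 0.5 m in x corresponds to a light tilted away from the vertical.
obj = [[0.0, 0.0, 1.0], [0.2, 0.1, 0.8]]
shad = [[0.5, 0.0, 0.0], [0.6, 0.1, 0.0]]
print(estimate_directional_light(obj, shad))

In practice the correspondences would come from detected shadow boundaries rather than being given, and a least-squares or robust fit would replace the simple average; the sketch only shows the underlying geometric relation.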