Digital light field photography

Focusing images well has been difficult since the beginnings of photography in 1839. Three manifestations of the problem are: the chore of having to choose what to focus on before clicking the shutter, the awkward coupling between aperture size and depth of field, and the high optical complexity of the lenses required to produce aberration-free images. These problems arise because conventional cameras record only the sum of all light rays striking each pixel on the image plane.

This dissertation presents a unified solution to these focus problems by instead recording the light field inside the camera: not just the position but also the direction of the light rays striking the image plane. I describe the design, prototyping and performance of a digital camera that records this light field in a single photographic exposure. The basic idea is to place an array of microlenses in front of the photosensor in an otherwise regular digital camera. The main price paid for this new kind of photography is the sacrifice of some image resolution to collect directional ray information. However, it is possible to vary the optical configuration smoothly from the light field camera back to a conventional camera by reducing the separation between the microlenses and the photosensor, allowing a selectable trade-off between image resolution and refocusing power. More importantly, current semiconductor technology can already produce sensors with an order of magnitude more resolution than we need in final images.

The extra directional ray information enables unprecedented capabilities after exposure. For example, it is possible to compute final photographs that are refocused at different depths, or that have extended depth of field, by re-sorting the recorded light rays appropriately. Theory predicts, and experiments corroborate, that blur due to incorrect focus can be reduced by a factor approximately equal to the directional resolution of the recorded light rays.
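The refocusing computation is, in essence, a shift-and-add over the recorded rays: each sub-aperture view of the scene is translated in proportion to its position in the aperture and the views are then summed. The sketch below illustrates this idea only; the function name, the (u, v, s, t) array layout, and the use of SciPy's interpolating shift are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np
from scipy.ndimage import shift

def refocus(lightfield, alpha):
    """Compute a photograph refocused at a new depth by shift-and-add.

    lightfield: 4D array of shape (U, V, S, T) holding the sub-aperture
    images L(u, v, s, t), where (u, v) indexes position in the aperture
    and (s, t) indexes position on the image plane. alpha is the ratio
    of the new focal depth to the original one; alpha = 1 reproduces
    the conventional photograph.
    """
    U, V, S, T = lightfield.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Shift each sub-aperture image in proportion to its
            # (centered) position in the aperture, then accumulate.
            du = (1.0 - 1.0 / alpha) * (u - (U - 1) / 2.0)
            dv = (1.0 - 1.0 / alpha) * (v - (V - 1) / 2.0)
            out += shift(lightfield[u, v], (du, dv), order=1)
    return out / (U * V)
```

With alpha = 1 every shift is zero and the result is simply the average of the sub-aperture images, i.e. the ordinary photograph; other values of alpha bring different depths into sharp focus.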
Similarly, digital correction of lens aberrations re-sorts aberrant light rays to where they should ideally have converged, improving image contrast and resolution. Future cameras based on these principles will be physically simpler, capture light more quickly, and provide greater flexibility in finishing photographs.
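The aberration-correction idea can be made concrete with a toy one-dimensional example. The point is that a light field camera records which ray carried which radiance, so rays can be re-binned to where an ideal lens would have sent them; a conventional sensor records only the blurred sums. All numbers and names below are illustrative assumptions.

```python
import numpy as np

def bin_rays(radiance, positions, width):
    """Accumulate per-ray radiance into a 1D image at the given pixel positions."""
    img = np.zeros(width)
    for L, x in zip(radiance, positions):
        img[int(np.clip(round(x), 0, width - 1))] += L
    return img

# Hypothetical rays from a single scene point: an ideal lens would
# converge all five on pixel 4, but aberration spreads them out.
radiance = np.ones(5)
aberrated = np.array([2.2, 3.1, 4.0, 4.9, 5.8])  # where they actually land
ideal = np.full(5, 4.0)                           # where they should converge

blurred = bin_rays(radiance, aberrated, width=8)   # conventional photograph
corrected = bin_rays(radiance, ideal, width=8)     # rays digitally re-sorted
```

The corrected image concentrates all the energy at the intended pixel, while the blurred one spreads it across five pixels, which is the contrast and resolution gain the abstract describes.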