Enhancing resolution along multiple imaging dimensions using assorted pixels

Multisampled imaging is a general framework for using the pixels of an image detector to simultaneously sample multiple dimensions of imaging (space, time, spectrum, brightness, polarization, etc.). The mosaic of red, green, and blue spectral filters found in most solid-state color cameras is one example of multisampled imaging. We briefly describe how multisampling can be used to explore other dimensions of imaging. Once such an image is captured, smooth reconstructions along the individual dimensions can be obtained using standard interpolation algorithms, but this typically results in a substantial loss of resolution (and, hence, image quality). Significantly greater resolution can be extracted in each dimension by noting that the light fields of real scenes contain enormous redundancies, which cause the different dimensions to be highly correlated. Hence, multisampled images can be better interpolated using local structural models that are learned offline from a diverse set of training images. The specific structural models we use are based on polynomial functions of the measured image intensities; they are both effective and computationally efficient. We demonstrate the benefits of structural interpolation with three applications: 1) traditional color imaging with a mosaic of color filters, 2) high dynamic range monochrome imaging using a mosaic of exposure filters, and 3) high dynamic range color imaging using a mosaic of overlapping color and exposure filters.
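The core idea above, learning a local polynomial predictor that maps a mosaic neighborhood to the missing values at a pixel, can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a simple RGGB Bayer pattern, uses only the degree-1 (linear) special case of the polynomial models, and trains on a synthetic scene whose color channels are scaled copies of one shared field (an extreme case of the cross-channel correlation the paper exploits).

```python
import numpy as np

rng = np.random.default_rng(0)

def bayer_sample(img):
    """Simulate an RGGB Bayer mosaic: each pixel keeps one channel."""
    h, w, _ = img.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = img[0::2, 0::2, 0]  # R sites
    mosaic[0::2, 1::2] = img[0::2, 1::2, 1]  # G sites
    mosaic[1::2, 0::2] = img[1::2, 0::2, 1]  # G sites
    mosaic[1::2, 1::2] = img[1::2, 1::2, 2]  # B sites
    return mosaic

def patches_and_targets(img, mosaic, k=2):
    """Gather (2k+1)^2 mosaic neighborhoods centered on R sites, paired
    with the true (unmeasured) green value at each center."""
    X, y = [], []
    h, w = mosaic.shape
    for i in range(2, h - k, 2):       # R sites lie on even rows
        for j in range(2, w - k, 2):   # and even columns (RGGB)
            X.append(mosaic[i - k:i + k + 1, j - k:j + k + 1].ravel())
            y.append(img[i, j, 1])
    return np.array(X), np.array(y)

def make_scene(h=32, w=32):
    # Channels are scalings of one shared random field, so they are
    # perfectly correlated -- a toy stand-in for natural-image redundancy.
    base = rng.random((h, w))
    return np.stack([0.9 * base, 1.0 * base, 0.8 * base], axis=-1)

# "Offline training": fit a linear model (degree-1 polynomial of the
# measured intensities) by least squares on one training scene.
train = make_scene()
Xtr, ytr = patches_and_targets(train, bayer_sample(train))
coeffs, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)

# Apply the learned structural model to a held-out scene.
test = make_scene()
Xte, yte = patches_and_targets(test, bayer_sample(test))
pred = Xte @ coeffs
rmse = np.sqrt(np.mean((pred - yte) ** 2))
print(f"RMSE of learned green-at-red predictor: {rmse:.6f}")
```

Because the toy channels are exact scalings of each other, the learned coefficients recover the missing green values almost perfectly; on real training images the same least-squares fit yields an approximate predictor, and the paper's higher-degree polynomial terms capture correlations that a purely linear map cannot.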
