Global Multiple-View Color Consistency

In a multiple-view image acquisition process, color consistency is not guaranteed. This is an important problem for image fusion tasks such as object texturing or mosaic blending. In automatic mode, the camera adapts its settings (shutter speed and aperture) to the content of the captured image, so the colors of objects change over an image sequence. To restore color consistency, a transformation model between reference and observed colors has to be estimated. This raises two main problems: data selection (finding the pixels common to several images) and the estimation of a reliable color transformation between those pixels. While most techniques ensure only pairwise consistency, possibly proceeding incrementally, we address the problem globally over the entire photo collection. We propose a global multi-view color consistency solution that first robustly selects the color information common to the images, and then estimates, through a global minimization, the color transformations that map all pictures into a common color reference. Our compact representation makes it possible to process large image datasets efficiently.
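To make the global minimization concrete, here is a minimal sketch of how such a joint estimation could be posed, assuming a simplified per-image gain/offset (affine) color model rather than the paper's actual transformation model. All function names, the correspondence format, and the regularization weight are illustrative assumptions: each correspondence links the colors of a common scene point seen in two images, one image is anchored as the color reference, and all per-image parameters are solved in a single linear least-squares system.

```python
import numpy as np

def global_color_consistency(correspondences, n_images):
    """Jointly estimate a gain/offset (a, b) per image so that corresponding
    colors agree after correction: a_i*c_i + b_i ~= a_j*c_j + b_j.

    correspondences: list of (i, j, c_i, c_j), scalar intensities of a common
    scene point observed in images i and j. Image 0 is the fixed reference
    (a_0 = 1, b_0 = 0). Returns a list of (a, b) pairs, one per image.
    """
    n_unknowns = 2 * (n_images - 1)  # (a, b) per non-reference image
    rows, rhs = [], []

    def cols(img):
        # Column indices of (a_img, b_img); None for the reference image.
        return None if img == 0 else (2 * (img - 1), 2 * (img - 1) + 1)

    for i, j, ci, cj in correspondences:
        row = np.zeros(n_unknowns)
        b = 0.0
        if cols(i) is None:
            b -= ci                       # reference term moves to the rhs
        else:
            ai, bi = cols(i)
            row[ai] += ci
            row[bi] += 1.0
        if cols(j) is None:
            b += cj
        else:
            aj, bj = cols(j)
            row[aj] -= cj
            row[bj] -= 1.0
        rows.append(row)
        rhs.append(b)

    # Weak prior a ~ 1, b ~ 0 keeps the system well-conditioned when some
    # images share few correspondences (weight 0.01 is an arbitrary choice).
    for k in range(n_unknowns):
        prior = np.zeros(n_unknowns)
        prior[k] = 0.01
        rows.append(prior)
        rhs.append(0.01 if k % 2 == 0 else 0.0)

    x, *_ = np.linalg.lstsq(np.vstack(rows), np.array(rhs), rcond=None)
    return [(1.0, 0.0)] + [(x[2 * k], x[2 * k + 1])
                           for k in range(n_images - 1)]
```

Because every residual couples two images, solving all of them at once propagates the reference colors through the whole collection instead of accumulating drift along a chain of pairwise corrections.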