In a multiple-view image acquisition process, color consistency is not ensured. This is an important problem for image fusion tasks such as object texturing or mosaic blending. In automatic mode, a camera adapts its settings (shutter speed and aperture) to the content of the captured image, so the colors of objects change over an image sequence. To restore color consistency, a transformation model between reference and observed colors has to be estimated. This raises two main problems: selecting the data (pixels common to several images) and estimating a reliable color transformation from those pixels. While most techniques ensure only pairwise consistency, possibly proceeding incrementally, we address the problem globally over the entire photo collection. We propose a global multi-view color consistency solution that, in a first step, robustly selects the color information common to the images and, in a second step, estimates, through a global minimization, the color transformations that bring all pictures into a common color reference. Our compact representation makes it possible to process large image datasets efficiently.
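The global minimization described above can be sketched as a linear least-squares problem: given matched color samples between image pairs, estimate a per-image, per-channel gain and offset so that all transformed colors agree, with one reference image pinned to the identity transform. This is a hypothetical minimal sketch, assuming an affine per-channel model and pre-selected correspondences; the paper's actual transformation model and robust selection step are not reproduced here.

```python
import numpy as np

def global_color_gains(n_images, pairs, ref=0):
    """Estimate per-image gain/offset (a_i, b_i) per channel so that
    a_i*c + b_i ~= a_j*c' + b_j for every matched color pair (c, c').
    `pairs` maps (i, j) -> (colors_i, colors_j), arrays of shape (N, 3).
    Hypothetical sketch, not the paper's exact formulation."""
    a = np.ones((n_images, 3))
    b = np.zeros((n_images, 3))
    for ch in range(3):  # solve each color channel independently
        rows, rhs = [], []
        for (i, j), (ci, cj) in pairs.items():
            for c, cp in zip(ci[:, ch], cj[:, ch]):
                r = np.zeros(2 * n_images)
                r[i], r[j] = c, -cp          # a_i*c - a_j*c'
                r[n_images + i] = 1.0        # + b_i
                r[n_images + j] = -1.0       # - b_j
                rows.append(r)
                rhs.append(0.0)
        # Gauge fixing: the reference image keeps the identity transform,
        # which removes the global scale ambiguity of the homogeneous system.
        g1 = np.zeros(2 * n_images); g1[ref] = 1.0
        g2 = np.zeros(2 * n_images); g2[n_images + ref] = 1.0
        rows += [g1 * 100.0, g2 * 100.0]
        rhs += [100.0, 0.0]
        x, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
        a[:, ch], b[:, ch] = x[:n_images], x[n_images:]
    return a, b
```

Because all images are coupled in one system, the solution distributes residual error over the whole collection instead of accumulating it along a chain of pairwise corrections, which is the point of the global formulation.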