A Multi-Exposure Image Fusion Based on the Adaptive Weights Reflecting the Relative Pixel Intensity and Global Gradient

This paper presents a new multi-exposure image fusion algorithm. The conventional approach defines a weight map for each of the multi-exposure images and then obtains the fused image as their weighted sum. Most existing methods focus on finding weight functions that assign larger weights to pixels in better-exposed regions. Whereas conventional methods apply the same function to each exposure independently, we propose weight functions that consider all the multi-exposure images simultaneously, reflecting the relative intensity between the images and their global gradients. Specifically, we define two kinds of weight functions. The first measures the importance of a pixel value relative to the overall brightness and the neighboring exposure images. The second emphasizes a pixel value when it lies in an intensity range with a relatively large global gradient compared to the other exposures. Owing to these simple weight functions, the proposed method requires only modest computational complexity, yet it achieves visually pleasing results and high scores on an image quality measure.
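The weighted-sum fusion framework the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's method: the `well_exposedness` weight below is a standard stand-in (favoring mid-gray pixels, as in Mertens et al.'s Exposure Fusion), whereas the paper's actual weights depend on relative intensity across exposures and on global gradients.

```python
import numpy as np

def well_exposedness(img, sigma=0.2):
    """Illustrative per-image weight favoring gray levels near 0.5.

    This is NOT the paper's weight function; it stands in for any
    per-pixel weight map defined over an exposure image in [0, 1].
    """
    gray = img.mean(axis=2)  # H x W luminance proxy
    return np.exp(-((gray - 0.5) ** 2) / (2 * sigma ** 2))

def fuse_multi_exposure(images, weights, eps=1e-12):
    """Fuse a stack of exposures as a per-pixel weighted sum.

    images:  list of K arrays, each H x W x 3, values in [0, 1]
    weights: list of K arrays, each H x W (one weight map per exposure)
    returns: H x W x 3 fused image
    """
    w = np.stack(weights, axis=0)                  # K x H x W
    w = w / (w.sum(axis=0, keepdims=True) + eps)   # normalize weights to sum to 1
    stack = np.stack(images, axis=0)               # K x H x W x 3
    return (w[..., None] * stack).sum(axis=0)      # weighted sum over exposures
```

With uniform weight maps this reduces to a plain average of the exposures; the fusion quality therefore rests entirely on how the weight maps are designed, which is the paper's contribution.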
