PAS-MEF: Multi-Exposure Image Fusion Based on Principal Component Analysis, Adaptive Well-Exposedness and Saliency Map

High dynamic range (HDR) imaging makes it possible to capture natural scenes much as they are perceived by human observers. With conventional low dynamic range (LDR) capture and display devices, significant detail may be lost because the dynamic range of natural scenes far exceeds what these devices can record or reproduce. To minimize this information loss and produce high-quality HDR-like images for LDR screens, this study proposes an efficient multi-exposure fusion (MEF) approach with a simple yet effective weight extraction method relying on principal component analysis, adaptive well-exposedness and saliency maps. The resulting weight maps are refined with a guided filter, and the fusion is carried out through a pyramidal decomposition. Experimental comparisons with existing techniques demonstrate that the proposed method achieves strong quantitative and visual results.
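The following Python sketch is not the authors' implementation; it only illustrates how such a pipeline can be assembled with NumPy and OpenCV under simplifying assumptions. The fixed well-exposedness sigma, the patch-PCA detail cue, the spectral-residual saliency, and all filter parameters are illustrative placeholders, and cv2.ximgproc.guidedFilter requires the opencv-contrib package.

```python
# Minimal sketch of a PAS-MEF-style fusion pipeline (assumptions noted above).
import cv2
import numpy as np


def well_exposedness(img, sigma=0.2):
    """Favour pixels near mid-gray in every channel (fixed sigma stands in for
    the paper's adaptive well-exposedness measure)."""
    return np.prod(np.exp(-((img - 0.5) ** 2) / (2.0 * sigma ** 2)), axis=2)


def spectral_saliency(img):
    """Spectral-residual saliency (Hou & Zhang) as a simple saliency cue."""
    gray = cv2.cvtColor(img.astype(np.float32), cv2.COLOR_BGR2GRAY)
    spectrum = np.fft.fft2(gray)
    log_amp = np.log1p(np.abs(spectrum)).astype(np.float32)
    residual = log_amp - cv2.blur(log_amp, (3, 3))
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * np.angle(spectrum)))) ** 2
    sal = cv2.GaussianBlur(sal.astype(np.float32), (9, 9), 2.5)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)


def pca_detail(img, patch=5):
    """Energy of local gray patches along their first principal component,
    used here as a simple PCA-based detail/contrast cue."""
    gray = cv2.cvtColor(img.astype(np.float32), cv2.COLOR_BGR2GRAY).astype(np.float64)
    pad = patch // 2
    padded = np.pad(gray, pad, mode="reflect")
    patches = np.lib.stride_tricks.sliding_window_view(padded, (patch, patch))
    patches = patches.reshape(gray.shape[0], gray.shape[1], -1)
    patches = patches - patches.mean(axis=2, keepdims=True)
    flat = patches.reshape(-1, patch * patch)
    # first principal direction estimated on a subsample of all patches
    _, _, vt = np.linalg.svd(flat[:: max(1, flat.shape[0] // 5000)], full_matrices=False)
    return np.abs(patches @ vt[0])


def fuse(stack, levels=5):
    """Fuse float32 BGR exposures in [0, 1]: per-image weights from the three
    cues, guided-filter refinement, then a Laplacian-pyramid blend."""
    weights = []
    for img in stack:
        w = well_exposedness(img) * spectral_saliency(img) * pca_detail(img) + 1e-12
        # refine the raw weight map with the exposure itself as the guide
        w = cv2.ximgproc.guidedFilter(img.astype(np.float32), w.astype(np.float32), 8, 1e-3)
        weights.append(np.clip(w, 1e-12, None))
    weights = np.stack(weights)
    weights /= weights.sum(axis=0, keepdims=True)

    fused_pyr = None
    for img, w in zip(stack, weights):
        gp_i, gp_w = [img.astype(np.float32)], [w.astype(np.float32)]
        for _ in range(levels - 1):
            gp_i.append(cv2.pyrDown(gp_i[-1]))
            gp_w.append(cv2.pyrDown(gp_w[-1]))
        lp_i = [gp_i[-1]]  # coarsest level keeps the Gaussian residual
        for k in range(levels - 1, 0, -1):
            size = (gp_i[k - 1].shape[1], gp_i[k - 1].shape[0])
            lp_i.insert(0, gp_i[k - 1] - cv2.pyrUp(gp_i[k], dstsize=size))
        contrib = [lp_i[k] * gp_w[k][..., None] for k in range(levels)]
        fused_pyr = contrib if fused_pyr is None else [f + c for f, c in zip(fused_pyr, contrib)]

    # collapse the fused Laplacian pyramid back to full resolution
    out = fused_pyr[-1]
    for k in range(levels - 2, -1, -1):
        size = (fused_pyr[k].shape[1], fused_pyr[k].shape[0])
        out = cv2.pyrUp(out, dstsize=size) + fused_pyr[k]
    return np.clip(out, 0.0, 1.0)
```

The multiplicative combination of the three cues and the pyramid depth would in practice be tuned per dataset; the sketch only mirrors the overall structure described in the abstract.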
