Extreme Learning Machine based exposure fusion for displaying HDR scenes

In spatial-domain multi-exposure fusion methods, the fusion rule is usually a weighted sum in which every source image receives the same weight, regardless of the detail it contains. Moreover, the fusion rule is commonly designed from a single feature, yet a single feature cannot comprehensively measure the quality of an image. These rules therefore lead to loss of detail and reduced contrast. In this paper, in order to combine multiple features extracted from an image into one adaptive weight for that image, we propose an exposure fusion method, ELM_EF, built on the Extreme Learning Machine (ELM) regression model. First, we construct the ELM input vectors from the contrast, saturation, and exposedness features of chosen representative blocks; the label of each input is produced by a Gaussian function parameterized by the exposure setting of the corresponding image, which yields the trained model. Second, the statistics of these features are computed for each test image and fed to the trained model to determine the weight of that image. Experiments show that the proposed method preserves more detail and contrast than the equal-weight averaging rule, and gives comparable or even better results than other typical exposure fusion methods.
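The following Python sketch illustrates the pipeline the abstract describes: per-block contrast, saturation, and exposedness features, Gaussian labels derived from the exposure setting, an ELM regressor trained on them, and per-image weights predicted for fusion. The feature definitions (Laplacian contrast, channel-spread saturation, mid-gray exposedness), the block selection, and all parameter values are assumptions for illustration, not the authors' reference implementation of ELM_EF.

```python
# Minimal sketch of ELM-based exposure weighting (assumed details, not ELM_EF itself).
import numpy as np

def block_features(block_rgb):
    """Contrast, saturation and well-exposedness of one RGB block with values in [0, 1]."""
    gray = block_rgb.mean(axis=2)
    # Contrast: mean absolute second difference (a simple Laplacian-like measure).
    contrast = (np.abs(np.diff(gray, n=2, axis=0)).mean()
                + np.abs(np.diff(gray, n=2, axis=1)).mean())
    # Saturation: spread of the R, G, B channels at each pixel.
    saturation = block_rgb.std(axis=2).mean()
    # Well-exposedness: closeness of intensities to mid-gray (sigma = 0.2, assumed).
    exposedness = np.exp(-((gray - 0.5) ** 2) / (2 * 0.2 ** 2)).mean()
    return np.array([contrast, saturation, exposedness])

def gaussian_label(exposure_setting, mu=0.0, sigma=1.0):
    """Label a training sample by a Gaussian of its exposure setting (mu, sigma assumed)."""
    return np.exp(-((exposure_setting - mu) ** 2) / (2 * sigma ** 2))

class ELMRegressor:
    """Single-hidden-layer ELM: random input weights, closed-form output weights."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # sigmoid activations

    def fit(self, X, y):
        X, y = np.asarray(X, float), np.asarray(y, float)
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        self.beta = np.linalg.pinv(self._hidden(X)) @ y  # least-squares output weights
        return self

    def predict(self, X):
        return self._hidden(np.asarray(X, float)) @ self.beta

# Usage sketch: train on representative blocks, then weight each test exposure
# by the ELM prediction on its per-image feature statistics.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    train_blocks = rng.random((200, 16, 16, 3))      # stand-in representative blocks
    exposure_settings = rng.uniform(-2, 2, size=200)  # stand-in exposure values (EV)
    X = np.stack([block_features(b) for b in train_blocks])
    y = gaussian_label(exposure_settings)
    elm = ELMRegressor(n_hidden=50).fit(X, y)

    test_stack = rng.random((3, 64, 64, 3))           # stand-in multi-exposure stack
    feats = np.stack([block_features(img) for img in test_stack])
    w = np.clip(elm.predict(feats), 1e-6, None)
    w /= w.sum()                                      # normalized per-image weights
    fused = np.tensordot(w, test_stack, axes=1)       # weighted-average fusion
    print("weights:", np.round(w, 3), "fused shape:", fused.shape)
```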
