Local Image Fusion Using Dispersion Minimisation

In the spatial domain, image fusion can be approached by estimating a set of fusion weights that measure the contribution of each pixel of the source images to the fused one. Combining these weights with the source images yields a fused result with improved visual perception. This paper finds the weights by minimising a constant-modulus (CM) cost function that describes the dispersion of the fused image. To accelerate convergence and avoid spurious solutions, we also introduce optimal learning rates when updating the fusion weights. Experimental results show that, on multifocus images, our scheme performs comparably to multi-scale wavelet methods such as the shift-invariant discrete wavelet transform (SI-DWT).
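To illustrate the idea, the following is a minimal sketch of CM-style dispersion minimisation for per-pixel fusion weights. The quartic cost form, the choice of the dispersion constant `R`, the fixed learning rate (the paper derives optimal ones), and the function name `cm_fuse` are all assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def cm_fuse(a, b, R=None, lr=0.01, iters=200):
    """Fuse two registered grayscale images (values in [0, 1]) with
    per-pixel weights w, found by gradient descent on a simplified
    constant-modulus dispersion cost
        J(w) = mean((y**2 - R)**2),  where  y = w*a + (1 - w)*b.
    Returns the fused image and the weight map."""
    a = a.astype(float)
    b = b.astype(float)
    if R is None:
        # Dispersion constant from source statistics (CM-style choice:
        # ratio of fourth- to second-order moments); an assumption here.
        m2 = 0.5 * (np.mean(a**2) + np.mean(b**2))
        m4 = 0.5 * (np.mean(a**4) + np.mean(b**4))
        R = m4 / m2
    w = np.full(a.shape, 0.5)          # start from an equal-weight fusion
    for _ in range(iters):
        y = w * a + (1.0 - w) * b
        # dJ/dw per pixel: d/dw (y^2 - R)^2 = 4*(y^2 - R)*y*(a - b)
        g = 4.0 * (y**2 - R) * y * (a - b)
        w = np.clip(w - lr * g, 0.0, 1.0)   # keep weights in [0, 1]
    return w * a + (1.0 - w) * b, w
```

With `w` constrained to [0, 1], each fused pixel is a convex combination of the corresponding source pixels, so the result stays within the range spanned by the inputs.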