Image Fusion Based on Combined Multi-scale Decomposition and Improved Sparse Representation

A new method combining multi-scale decomposition and improved sparse representation (MSD-ISR) for image fusion is proposed in this paper, and it has three advantages. It is known to be hard to determine the decomposition level in multi-scale decomposition, and the fused result may suffer from low contrast. The proposed MSD-ISR method addresses this problem; in addition, it achieves higher efficiency in the sparse decomposition process and better preserves the information of the source images. Experimental results show that the proposed MSD-ISR method achieves better fusion results.

Introduction

A method of image fusion based on the combination of multi-scale decomposition and improved sparse representation is proposed. On one hand, sparse representation is applied to image fusion based on a multi-scale transform, which combines the advantages of sparse representation and multi-scale decomposition. This alleviates the decomposition-level problem as well as the low contrast of transform-based image fusion [1, 2], so the proposed method better preserves the information in the source images. On the other hand, the whole image, rather than image blocks, is used in the sparse decomposition, which raises the efficiency of the sparse decomposition [3]. Five indicators are selected as the objective evaluation index, namely information entropy (E), standard deviation (SD), mutual information (MI), average gradient (AG) and QABF. Results show that the proposed method performs well on both the objective and the subjective criteria.

The Proposed MSD-ISR Method

The Framework of MSD-ISR

A new method combining multi-scale decomposition and improved sparse representation (MSD-ISR) is used for image fusion. When fusing the low-frequency components we choose the coefficients with the larger gradient, and for the high-frequency components we choose the larger coefficients. The flowchart is shown in Figure 1. The fusion steps are as follows (illustrative sketches of the steps are given after the list):

Step 1: Obtain the training image train_img. Choose the pixels that have the larger area gradient between the source images as the pixels of the training image.

Step 2: Decompose both the source images and the training image. Apply the multi-scale decomposition method to the source images and the training image respectively to obtain their high-frequency and low-frequency components.

Step 3: Obtain the low-frequency and high-frequency reconstruction coefficients using sparse representation. The high-frequency and low-frequency components of the training image are used to train the decomposed coefficients of the source images, from which the reconstruction coefficients are obtained.
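As a rough illustration of Step 1, the following sketch builds the training image by comparing local (area) gradients of the two source images pixel by pixel and keeping, at each position, the pixel from the source with the larger gradient. The names area_gradient and build_train_image, the 3x3 averaging window, and the use of NumPy/SciPy are assumptions made only for illustration, not details taken from the paper.

# Sketch of Step 1: training image from pixel-wise area-gradient comparison.
import numpy as np
from scipy.ndimage import uniform_filter

def area_gradient(img, window=3):
    # Local (area) gradient: gradient magnitude averaged over a small window.
    gy, gx = np.gradient(img.astype(np.float64))
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    return uniform_filter(magnitude, size=window)

def build_train_image(src_a, src_b, window=3):
    # Keep, at each pixel, the source whose surrounding area gradient is larger.
    ga = area_gradient(src_a, window)
    gb = area_gradient(src_b, window)
    return np.where(ga >= gb, src_a, src_b)

A training image assembled this way concentrates the locally sharper content of both sources, which is what makes it suitable as training data in Step 3.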
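The next sketch covers Step 2 together with the coefficient-selection rules stated above (larger gradient for the low-frequency band, larger absolute value for the high-frequency bands). It uses a discrete wavelet transform from PyWavelets purely as a stand-in for the multi-scale decomposition, applies the rules directly to the decomposed coefficients, and leaves out the sparse-representation stage of Step 3; the wavelet name and decomposition level are arbitrary choices.

# Sketch of Step 2 plus the low/high-frequency selection rules (no SR stage).
import numpy as np
import pywt

def fuse_msd(src_a, src_b, wavelet="db2", level=2):
    ca = pywt.wavedec2(src_a.astype(np.float64), wavelet, level=level)
    cb = pywt.wavedec2(src_b.astype(np.float64), wavelet, level=level)

    # Low-frequency band: keep the coefficients with the larger gradient.
    la, lb = ca[0], cb[0]
    gya, gxa = np.gradient(la)
    gyb, gxb = np.gradient(lb)
    mask = np.hypot(gxa, gya) >= np.hypot(gxb, gyb)
    fused = [np.where(mask, la, lb)]

    # High-frequency bands: keep the coefficients with the larger magnitude.
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip((ha, va, da), (hb, vb, db))))

    return pywt.waverec2(fused, wavelet)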
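Finally, a heavily simplified sketch of the sparse-representation stage of Step 3. The paper codes whole subbands rather than image blocks; here the low-frequency bands of the two sources are coded with orthogonal matching pursuit against a dictionary assembled from the training image's low-frequency band (shifted copies of it, which is only an illustrative stand-in for the paper's dictionary training), and the fused code keeps, entry by entry, the activation with the larger magnitude before reconstruction.

# Simplified sketch of Step 3: whole-subband sparse coding and max-magnitude fusion.
import numpy as np
from sklearn.decomposition import sparse_encode

def sparse_fuse_lowpass(low_a, low_b, low_train, n_atoms=64, n_nonzero=16):
    # Illustrative dictionary: normalized shifted copies of the flattened
    # training subband (the paper instead trains its dictionary from the
    # decomposed training image).
    v_train = low_train.ravel()
    step = max(1, v_train.size // n_atoms)
    atoms = np.stack([np.roll(v_train, s) for s in range(0, v_train.size, step)][:n_atoms])
    atoms = atoms / (np.linalg.norm(atoms, axis=1, keepdims=True) + 1e-12)

    # Sparse-code both whole subbands at once with OMP.
    codes = sparse_encode(np.stack([low_a.ravel(), low_b.ravel()]),
                          atoms, algorithm="omp", n_nonzero_coefs=n_nonzero)

    # Fuse the sparse coefficients by larger magnitude, then reconstruct.
    fused_code = np.where(np.abs(codes[0]) >= np.abs(codes[1]), codes[0], codes[1])
    return (fused_code @ atoms).reshape(low_a.shape)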

[1] Y. Zhu et al., "A multi-focus image fusion algorithm using modified adaptive PCNN model," in Proc. 12th Int. Conf. on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD), 2016.

[2] Y. Liu et al., "A general framework for image fusion based on multi-scale transform and sparse representation," Information Fusion, 2015.

[3] X. Ji, "An improved image fusion method of infrared image and SAR image based on Contourlet and sparse representation," in Proc. 7th Int. Conf. on Intelligent Human-Machine Systems and Cybernetics (IHMSC), 2015.

[4] X. Ji et al., "CT and MR images fusion method based on nonsubsampled Contourlet transform," in Proc. 8th Int. Conf. on Intelligent Human-Machine Systems and Cybernetics (IHMSC), 2016.

[5] B. Zhang et al., "Multi-focus image fusion based on sparse decomposition and background detection," Digital Signal Processing, 2016.