Optics and Precision Engineering, Volume 32, Issue 10, 1567 (2024)
Infrared image and visible image fusion algorithm based on secondary image decomposition
To address the severe loss of detail, the failure to highlight the feature information of the infrared image, and the neglect of the source images' semantic information in infrared and visible image fusion, a fusion network for infrared and visible images based on secondary image decomposition was proposed. An encoder decomposed the source image twice to extract feature information at different scales; two-element attention was then used to assign weights to the features at each scale; a global semantic branch was introduced; pixel-wise addition was adopted as the fusion strategy; and the fused image was reconstructed by a decoder. In the experiments, the FLIR dataset was used for training, the TNO and RoadScene datasets were used for testing, and eight objective evaluation metrics of image fusion were selected for comparative analysis. On the TNO dataset, across the eight metrics, including information entropy, standard deviation, spatial frequency, visual fidelity, average gradient, and difference correlation coefficient, SIDFuse is 12.2%, 9.0%, 90.2%, 13.9%, 85.1%, 16.8%, 6.7%, and 30.7% higher, respectively, than DenseFuse, a classical fusion algorithm based on convolutional networks. Compared with the latest fusion network, LRRNet, the improvements are 2.5%, 5.6%, 31.5%, 5.4%, 25.2%, 17.9%, 7.5%, and 20.7%, respectively. The proposed image fusion algorithm therefore achieves high contrast and more effectively retains both the detail texture of the visible image and the feature information of the infrared image, giving it clear advantages over similar methods.
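The following is a minimal PyTorch sketch of the pipeline outlined in the abstract: a two-stage ("secondary") encoder decomposition, an attention module that re-weights the multi-scale features, a global semantic branch, pixel-wise addition as the fusion rule, and a decoder for reconstruction. All module names, layer widths, and the specific attention design are illustrative assumptions, not the authors' published SIDFuse architecture.

```python
# Illustrative sketch only; the real SIDFuse network details are not given in the abstract.
import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    """3x3 convolution + ReLU used throughout the sketch."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True)
        )

    def forward(self, x):
        return self.body(x)


class ChannelAttention(nn.Module):
    """Stand-in for the attention that assigns weights to multi-scale features."""
    def __init__(self, ch):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch, 1), nn.Sigmoid()
        )

    def forward(self, x):
        return x * self.gate(x)


class FusionSketch(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        # Secondary decomposition: the encoder decomposes the source image twice.
        self.decomp1 = ConvBlock(1, ch)
        self.decomp2 = ConvBlock(ch, ch)
        self.attn = ChannelAttention(ch)
        # Global semantic branch: pooled context broadcast back onto the feature map.
        self.semantic = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch, 1))
        self.decoder = nn.Sequential(ConvBlock(ch, ch), nn.Conv2d(ch, 1, 3, padding=1))

    def encode(self, img):
        f1 = self.decomp1(img)              # first decomposition
        f2 = self.decomp2(f1)               # second decomposition
        feat = self.attn(f1 + f2)           # attention-weighted multi-scale features
        return feat + self.semantic(feat)   # add global semantic context

    def forward(self, ir, vis):
        fused = self.encode(ir) + self.encode(vis)  # pixel-wise addition fusion strategy
        return self.decoder(fused)                  # reconstruct the fused image


if __name__ == "__main__":
    ir = torch.rand(1, 1, 256, 256)   # single-channel infrared image
    vis = torch.rand(1, 1, 256, 256)  # single-channel visible image
    print(FusionSketch()(ir, vis).shape)  # torch.Size([1, 1, 256, 256])
```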
Xin MA, Chunyu YU, Yixin TONG, Jun ZHANG. Infrared image and visible image fusion algorithm based on secondary image decomposition[J]. Optics and Precision Engineering, 2024, 32(10): 1567
Received: Sep. 26, 2023
Accepted: --
Published Online: Jul. 8, 2024
The Author Email: YU Chunyu (yucy@njupt.edu.cn)