Infrared Technology, Volume 46, Issue 9, 1060 (2024)

Infrared and Visible Image Fusion Combining Multi-scale and Convolutional Attention

Yanjie QI and Qinhe HOU*
Author Affiliations
  • School of Electronic Information Engineering, Taiyuan University of Science and Technology, Taiyuan 030024, China

    A multiscale and convolutional attention-based infrared and visible image fusion algorithm is proposed to address the issues of insufficient single-scale feature extraction and the loss of details, such as infrared targets and visible textures, when fusing infrared and visible images. First, an encoder network combining a multiscale feature extraction module and a deformable convolutional attention module is designed to extract important feature information from infrared and visible images across multiple receptive fields. Subsequently, a fusion strategy based on spatial and channel dual-attention mechanisms is adopted to further fuse the typical features of the infrared and visible images. Finally, a decoder network composed of three convolutional layers is used to reconstruct the fused image. Additionally, a hybrid loss function based on mean squared error, multiscale structural similarity, and color is designed to constrain network training and further improve the similarity between the fused image and the source images. The proposed algorithm is compared with seven image fusion algorithms on a public dataset. In both subjective and objective evaluations, it exhibits better edge preservation, better retention of source image information, and higher fusion image quality than the other algorithms.
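    The hybrid loss described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the weights `w_mse`, `w_ssim`, and `w_color` are assumed hyperparameters, the single-window SSIM stands in for the multiscale structural similarity the paper uses, and the channel-mean color term against the visible image is likewise an assumption.

```python
import numpy as np

def ssim(x, y, c1=0.01**2, c2=0.03**2):
    """Global (single-window) SSIM over a whole image.

    Real implementations use local Gaussian windows and average over
    multiple scales (MS-SSIM); this simplified version keeps only the
    luminance/contrast/structure comparison on global statistics.
    """
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def hybrid_loss(fused, ir, vis, w_mse=1.0, w_ssim=1.0, w_color=1.0):
    """Hybrid loss: MSE + (1 - SSIM) + color term (illustrative weights)."""
    # Pixel-intensity fidelity against both source images
    mse = ((fused - ir) ** 2).mean() + ((fused - vis) ** 2).mean()
    # Structural fidelity: 1 - SSIM against both source images
    ssim_term = (1 - ssim(fused, ir)) + (1 - ssim(fused, vis))
    # Color fidelity: per-channel mean difference against the visible
    # image (assumed form; color information comes from the visible source)
    color = np.abs(fused.mean(axis=(0, 1)) - vis.mean(axis=(0, 1))).mean()
    return w_mse * mse + w_ssim * ssim_term + w_color * color
```

    When the fused image equals both source images, all three terms vanish and the loss is zero; any deviation in intensity, structure, or channel means increases it, which is the constraint the training objective imposes.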


    QI Yanjie, HOU Qinhe. Infrared and Visible Image Fusion Combining Multi-scale and Convolutional Attention[J]. Infrared Technology, 2024, 46(9): 1060

    Paper Information

    Received: Jun. 21, 2023

    Accepted: Jan. 21, 2025

    Published Online: Jan. 21, 2025

    The Author Email: Qinhe HOU (776094677@qq.com)
