Infrared Technology, Vol. 46, Issue 12, 1362 (2024)

Infrared and Visible Image Fusion Based on Deep Image Decomposition

Chaoyang CHEN1,2 and Yuanyuan JIANG1,*
Author Affiliations
  • 1College of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan 232000, China
  • 2Institute of Environment-friendly Materials and Occupational Health, Anhui University of Science and Technology, Wuhu 241003, China

    Infrared and visible image fusion is an enhancement technique designed to create a single fused image that retains the complementary advantages of the source images. In this study, an infrared and visible image fusion method based on deep image decomposition is proposed. First, an encoder decomposes each source image into a background feature map and a detail feature map; a salient feature extraction module is introduced into the encoder to highlight the edge and texture features of the source images. A decoder then reconstructs the fused image. During training, a gradient coefficient penalty is applied to the visible image for regularized reconstruction to ensure texture consistency, and a loss function is designed for image decomposition and reconstruction that reduces the differences between background feature maps and amplifies the differences between detail feature maps. The experimental results show that the method generates fused images with rich details and salient targets. In addition, the method outperforms the comparative methods in both subjective and objective evaluations on the TNO and FLIR public datasets.
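    The training objective described above can be sketched numerically. The snippet below is a minimal illustration, not the authors' implementation: the function names (`decomposition_loss`, `gradient_penalty`) and the tanh-bounded formulation are assumptions chosen only to show the stated idea of pulling background feature maps together, pushing detail feature maps apart, and penalizing gradient mismatch against the visible image.

```python
import numpy as np

def decomposition_loss(bg_ir, bg_vis, dt_ir, dt_vis, alpha=1.0):
    """Hypothetical decomposition term: minimize the gap between the
    background feature maps while rewarding (amplifying) the gap
    between the detail feature maps. tanh bounds each gap so the
    detail term cannot grow without limit."""
    bg_gap = np.mean((bg_ir - bg_vis) ** 2)
    dt_gap = np.mean((dt_ir - dt_vis) ** 2)
    return np.tanh(bg_gap) - alpha * np.tanh(dt_gap)

def gradient_penalty(fused, visible):
    """Hypothetical texture-consistency term: an L1 penalty on the
    difference between the image gradients of the fused result and
    those of the visible image."""
    gy_f, gx_f = np.gradient(fused)
    gy_v, gx_v = np.gradient(visible)
    return np.mean(np.abs(gx_f - gx_v)) + np.mean(np.abs(gy_f - gy_v))
```

Under this sketch, identical background maps with distinct detail maps give a lower (better) decomposition loss than the reverse, and the gradient penalty vanishes when the fused image reproduces the visible image's texture exactly.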

    CHEN Chaoyang, JIANG Yuanyuan. Infrared and Visible Image Fusion Based on Deep Image Decomposition[J]. Infrared Technology, 2024, 46(12): 1362

    Paper Information

    Received: Mar. 20, 2024

    Accepted: Jan. 14, 2025

    Published Online: Jan. 14, 2025

    The Author Email: JIANG Yuanyuan (jyyLL672@163.com)

    CSTR:32186.14.
