Infrared Technology, Volume 47, Issue 5, 619 (2025)

Multi-layer Perceptron Interactive Fusion Method for Infrared and Visible Images

Jing SUN1, Zhishe WANG1,*, Fan YANG1, and Zhaofa YU2
Author Affiliations
  • 1School of Applied Science, Taiyuan University of Science and Technology, Taiyuan 030024, China
  • 2Ordnance NCO Academy, Army Engineering University of PLA, Wuhan 430075, China
    References (25)

    [2] FENG Z, LAI J, XIE X. Learning modality-specific representations for visible-infrared person re-identification[J]. IEEE Transactions on Image Processing, 2020, 29: 579-590.

    [4] WANG Z S, XU J W, JIANG X L, et al. Infrared and visible image fusion via hybrid decomposition of NSCT and morphological sequential toggle operator[J]. Optik, 2020, 201: 1-11.

    [5] LI H, WU X J, KITTLER J. MDLatLRR: A novel decomposition method for infrared and visible image fusion[J]. IEEE Transactions on Image Processing, 2020, 29: 4733-4746.

    [6] WANG Z S, WANG J Y, WU Y Y, et al. UNFusion: A unified multi-scale densely connected network for infrared and visible image fusion[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(6): 3360-3374.

    [7] XU H, ZHANG H, MA J Y. Classification saliency-based rule for visible and infrared image fusion[J]. IEEE Transactions on Computational Imaging, 2021, 7: 824-836.

    [9] WANG Z S, WU Y Y, WANG J Y, et al. Res2Fusion: Infrared and visible image fusion based on dense Res2net and double non-local attention models[J]. IEEE Transactions on Instrumentation and Measurement, 2022, 71: 1-12.

    [10] WANG Z S, YANG F, WANG J Y, et al. A dual-path residual attention fusion network for infrared and visible images[J]. Optik, 2023, 33(7): 3159-3172.

    [11] XU H, MA J Y, JIANG J J, et al. U2Fusion: A unified unsupervised image fusion network[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(1): 502-518.

    [12] LI H, WU X J, KITTLER J. RFN-Nest: An end-to-end residual fusion network for infrared and visible images[J]. Information Fusion, 2021, 73: 72-86.

    [13] MA J Y, YU W, LIANG P W, et al. FusionGAN: A generative adversarial network for infrared and visible image fusion[J]. Information Fusion, 2019, 48: 11-26.

    [14] MA J Y, ZHANG H, SHAO Z F, et al. GANMcC: A generative adversarial network with multiclassification constraints for infrared and visible image fusion[J]. IEEE Transactions on Instrumentation and Measurement, 2021, 70: 1-14.

    [16] WANG Z S, SHAO W Y, CHEN Y L, et al. Infrared and visible image fusion via interactive compensatory attention adversarial learning[J]. IEEE Transactions on Multimedia, 2023, 25: 7800-7813.

    [17] WANG Z S, SHAO W Y, CHEN Y L, et al. A cross-scale iterative attentional adversarial fusion network for infrared and visible images[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2023, 33(8): 3677-3688.

    [18] DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16×16 words: Transformers for image recognition at scale[J/OL]. arXiv preprint arXiv: 2010.11929, 2020.

    [19] WANG Z S, CHEN Y L, SHAO W Y, et al. SwinFuse: A residual swin transformer fusion network for infrared and visible images[J]. IEEE Transactions on Instrumentation and Measurement, 2022, 71: 1-12.

    [20] TANG W, HE F Z, LIU Y. YDTR: Infrared and visible image fusion via Y-shape dynamic transformer[J]. IEEE Transactions on Multimedia, 2023, 25: 5413-5428.

    [21] TOET A. TNO Image Fusion Dataset[DB/OL]. (2014) [2023-12-01]. https://figshare.com/articles/TNO_Image_Fusion_Dataset/1008029.

    [22] TANG L F. MSRS Dataset[DB/OL]. (2022) [2023-12-01]. https://github.com/Linfeng-Tang/MSRS.

    [23] ZHENG L, FORSYTH D S, LAGANIÈRE R. A feature-based metric for the quantitative evaluation of pixel-level image fusion[J]. Computer Vision and Image Understanding, 2008, 109(1): 56-68.

    [24] HAN Y, CAI Y Z, CAO Y, et al. A new image fusion performance metric based on visual information fidelity[J]. Information Fusion, 2013, 14(2): 127-135.

    [25] WANG Z, BOVIK A C, SHEIKH H R, et al. Image quality assessment: From error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 2004, 13(4): 600-612.

    [26] RAO Y J. In-fibre Bragg grating sensors[J]. Measurement Science and Technology, 1997, 8(4): 355-375.

    [27] QU G H, ZHANG D L, YAN P F. Information measure for performance of image fusion[J]. Electronics Letters, 2002, 38(7): 313-315.

    [28] PIELLA G, HEIJMANS H. A new quality metric for image fusion[C]//Proceedings of the International Conference on Image Processing, 2003, 3: 173-176.

    [29] XYDEAS C S, PETROVIC V. Objective image fusion performance measure[J]. Electronics Letters, 2000, 36(4): 308-309.

    Paper Information

    Received: Dec. 12, 2023

    Accepted: Jul. 3, 2025

    Published Online: Jul. 3, 2025

    Corresponding Author: WANG Zhishe (wangzs@tyust.edu.cn)
