Laser & Optoelectronics Progress, Volume 62, Issue 10, 1037008 (2025)

Pulse-Coupled Dual Adversarial Learning Network for Infrared and Visible Image Fusion

Jia Zhao1,2, Yuelan Xin1,2,*, Jizhao Liu3, and Qingqing Wang4
Author Affiliations
  • 1School of Physical and Electronic Information Engineering, Qinghai Normal University, Xining 810001, Qinghai, China
  • 2State Key Laboratory of Tibetan Intelligent Information Processing and Application, Xining 810001, Qinghai, China
  • 3School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, Gansu, China
  • 4Qinghai Meteorological Information Center, Xining 810001, Qinghai, China
    Figures & Tables (14)
    Dual-discriminator generative adversarial network structure
    Infrared and visible image fusion architecture with self-attention and cross-attention mechanisms. (a) Generator fusion process; (b) self-attention block; (c) cross-attention block
    Feature extraction and reconstruction modules of fusion architecture. (a) Feature extraction module; (b) image reconstruction module
    Discriminator architecture
    Pulse-coupled neural network model
    Qualitative comparison results of ten methods on eight pairs of images from TNO and M3FD datasets
    Distribution curves of images from the TNO dataset; a point (x, y) on a curve indicates that the metric values of (100 × x)% of the images do not exceed y (a minimal plotting sketch is given after the tables below)
    Distribution curves of images from the M3FD dataset; a point (x, y) on a curve indicates that the metric values of (100 × x)% of the images do not exceed y
    Qualitative comparison results of ten methods on RoadScene dataset
    Distribution curves of images from the RoadScene dataset; a point (x, y) on a curve indicates that the metric values of (100 × x)% of the images do not exceed y
    Visualization results in ablation experiments
    Object detection results
    • Table 1. Performance comparison of different fusion methods (the best values are highlighted in bold, and the second-best values are underlined)

      Method           Year   Size /MB   FLOPs /10⁹   Average running time on TNO /s   Platform
      BF[3]            2020   —          —            1.376                            MATLAB
      CrossFuse[9]     2024   23.31      80.96        0.588                            PyTorch
      DenseFuse[21]    2019   0.30       5.766        0.188                            PyTorch
      FusionGAN[4]     2019   0.93       121.964      0.490                            TensorFlow
      FECFusion[23]    2023   0.15       18.8         0.118                            PyTorch
      FLFuse-Net[24]   2022   0.009      2.39         0.646                            PyTorch
      NSST-PCNN[12]    2019   —          —            6.301                            MATLAB
      Res2Fusion[22]   2022   0.39       6.418        0.820                            PyTorch
      YDTR[25]         2023   0.22       52.1         1.061                            PyTorch
      PDGAN            2024   22.30      49.33        0.259                            PyTorch
    • Table 2. Quantitative comparison results of the ablation experiments (the best values are highlighted in bold)

      Method       EN      SD      MI      VIF
      w/o DT       5.920   8.308   2.295   0.524
      w/o DD       6.648   9.249   2.651   0.726
      w/o DT,DD    6.455   9.189   2.471   0.662
      w/o DPCNN    6.777   8.914   2.669   0.722
      w/o CCFM     6.550   8.742   2.394   0.674
      PDGAN        6.909   9.551   2.749   0.790
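    For readers who want to reproduce the distribution curves described in the figure captions above, the following Python sketch is one possible illustration; it is not the authors' code, and all names and the image list are placeholders. It computes the entropy metric (EN, as reported in Table 2) for a set of fused images and plots, for each metric value y, the fraction x of images whose value does not exceed y, matching the (100 × x)% reading of the captions.

```python
# Minimal sketch (not the authors' code): entropy (EN) per fused image and its
# cumulative distribution curve, as described in the figure captions above.
import numpy as np
import matplotlib.pyplot as plt


def entropy(gray):
    """Shannon entropy of an 8-bit grayscale image (the EN metric in Table 2)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))


def distribution_curve(values):
    """Return (x, y): for each sorted metric value y, the fraction x of images
    whose metric does not exceed y, i.e. (100 * x)% of images are <= y."""
    y = np.sort(np.asarray(values))
    x = np.arange(1, len(y) + 1) / len(y)
    return x, y


if __name__ == "__main__":
    # Placeholder data: random 8-bit grayscale arrays stand in for fused images.
    rng = np.random.default_rng(0)
    fused_images = [rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
                    for _ in range(40)]

    en_values = [entropy(img) for img in fused_images]
    x, y = distribution_curve(en_values)

    plt.plot(x, y, label="EN")
    plt.xlabel("Fraction of images")
    plt.ylabel("Metric value")
    plt.legend()
    plt.show()
```

    The same routine applies to the other metrics (SD, MI, VIF); only the per-image metric function changes.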
    Citation

    Jia Zhao, Yuelan Xin, Jizhao Liu, Qingqing Wang. Pulse-Coupled Dual Adversarial Learning Network for Infrared and Visible Image Fusion[J]. Laser & Optoelectronics Progress, 2025, 62(10): 1037008
    Paper Information

    Category: Digital Image Processing

    Received: Oct. 22, 2024

    Accepted: Nov. 26, 2024

    Published Online: Apr. 27, 2025

    Author Email: Yuelan Xin (xinyue001112@163.com)

    DOI: 10.3788/LOP242143

    CSTR: 32186.14.LOP242143
