Laser & Optoelectronics Progress, Vol. 60, Issue 24, 2410002 (2023)

Infrared and Visible Image Fusion Based on Separate Expression of Mutual Information Features

Hui Wang1,2,3, Xiaoqing Luo1,2,3,*, and Zhancheng Zhang4
Author Affiliations
  • 1School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, Jiangsu, China
  • 2Institute of Advanced Technology, Jiangnan University, Wuxi 214122, Jiangsu, China
  • 3Jiangsu Laboratory of Pattern Recognition and Computational Intelligence, Wuxi 214122, Jiangsu, China
  • 4School of Electronics and Information Engineering, Suzhou University of Science and Technology, Suzhou 215000, Jiangsu, China
    Figures & Tables (16)
    Overall framework of the fusion network
    Encoder construction. (a) Mutual information encoder; (b) general convolutional encoder
    Hierarchical feature visualisation. (a1) IR; (a2) VIS; (b1) C1IR; (b2) C1VIS; (c1) C2IR; (c2) C2VIS; (d1) R1IR; (d2) R1VIS; (e1) R2IR; (e2) R2VIS
    HAFF module structure
    Visualisation of HAFF module features and parameters. (a) IR; (b) VIS; (c) F1; (d) α; (e) F2; (f) β; (g) F3; (h) γ; (i) F' (a weighted-fusion sketch follows this list)
    Experimental comparison results on the TNO dataset. (a) IR; (b) VIS; (c) GTF; (d) GANMcC; (e) GAN-FM; (f) SDNet; (g) DenseFuse; (h) DRF; (i) IFSepR; (j) ours
    Experimental comparison results on the RoadScene dataset. (a) IR; (b) VIS; (c) GTF; (d) GANMcC; (e) GAN-FM; (f) SDNet; (g) DenseFuse; (h) DRF; (i) IFSepR; (j) ours
    Fusion results of different weight values for a pair of images in the TNO dataset. (a) IR; (b) VIS; (c) ω=0; (d) ω=0.1; (e) ω=0.2; (f) ω=0.3; (g) ω=0.4; (h) ω=0.5; (i) ω=0.6; (j) ω=0.7; (k) ω=0.8; (l) ω=0.9; (m) ω=1.0; (n) ours
    Fusion results of each method on MRI-CT medical images. (a) MR-T1; (b) CT; (c) GTF; (d) GANMcC; (e) GAN-FM; (f) SDNet; (g) DenseFuse; (h) DRF; (i) IFSepR; (j) ours
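
    The HAFF captions above name three intermediate features F1, F2, and F3 that are blended into the fused feature F' under weight maps α, β, and γ. Below is a minimal PyTorch-style sketch of such a weighted hierarchical fusion; the 1×1-convolution weight branches and softmax normalization are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class HAFFSketch(nn.Module):
    """Sketch of a hierarchical attention feature fusion step: blends
    F1, F2, F3 into F' with spatial weight maps alpha, beta, gamma."""

    def __init__(self, channels: int):
        super().__init__()
        # One 1x1 conv per feature predicts a single-channel weight map
        # (assumed design; the paper's weight branches may differ).
        self.w1 = nn.Conv2d(channels, 1, kernel_size=1)
        self.w2 = nn.Conv2d(channels, 1, kernel_size=1)
        self.w3 = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, f1, f2, f3):
        # Normalize across the three features so that
        # alpha + beta + gamma = 1 at every spatial location.
        logits = torch.cat([self.w1(f1), self.w2(f2), self.w3(f3)], dim=1)
        alpha, beta, gamma = torch.softmax(logits, dim=1).chunk(3, dim=1)
        return alpha * f1 + beta * f2 + gamma * f3  # fused feature F'
```

    For three same-sized feature maps, `HAFFSketch(64)(f1, f2, f3)` returns a feature of the same shape; the per-pixel weight maps correspond to panels (d), (f), and (h) in the caption above.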
    • Table 1. Objective evaluation metrics for each method on 43 pairs of images from the TNO dataset

      Method     SD       EN      DF      EI       AG      SF       Qp      MI
      GTF        39.3766  6.7700  3.5374  27.5437  2.7666  7.0316   0.1954  13.5402
      GANMcC     30.4876  6.2167  2.2788  20.1246  1.9334  4.6448   0.1363  12.4336
      GAN-FM     28.6088  6.5363  3.5750  27.3276  2.7304  6.9313   0.1945  13.0727
      SDNet      33.0535  6.7042  4.7285  39.9012  3.9359  9.3848   0.2762  13.4086
      DenseFuse  35.6407  6.8817  3.6372  30.7780  3.0116  7.1147   0.3039  13.7635
      DRF        9.7089   5.0372  0.7828  7.8756   0.7220  1.6082   0.1127  10.0745
      IFSepR     26.9453  6.4900  3.3446  28.0275  2.7778  7.1973   0.3295  12.9799
      Ours       43.6229  7.1650  5.7459  53.2266  5.0586  10.7925  0.4995  14.3300
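
      The column abbreviations in Tables 1-6 are standard fusion-quality measures, all higher-is-better: SD (standard deviation), EN (entropy), DF (image definition), EI (edge intensity), AG (average gradient), SF (spatial frequency), Qp (a phase-congruency-based quality index), and MI (mutual information). A minimal NumPy sketch of several of them follows, using common definitions that may differ slightly from the paper's exact variants:

```python
import numpy as np

def sd(img):
    """SD: standard deviation of pixel intensities."""
    return float(np.std(img))

def en(img, bins=256):
    """EN: Shannon entropy (bits) of the histogram; assumes 8-bit input."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def ag(img):
    """AG: average gradient over horizontal/vertical differences."""
    img = img.astype(np.float64)
    gx = img[:, 1:] - img[:, :-1]
    gy = img[1:, :] - img[:-1, :]
    return float(np.mean(np.sqrt((gx[:-1, :] ** 2 + gy[:, :-1] ** 2) / 2)))

def sf(img):
    """SF: sqrt(row frequency^2 + column frequency^2)."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean((img[:, 1:] - img[:, :-1]) ** 2))
    cf = np.sqrt(np.mean((img[1:, :] - img[:-1, :]) ** 2))
    return float(np.sqrt(rf ** 2 + cf ** 2))

def mi(fused, src, bins=256):
    """MI: mutual information between the fused image and one source."""
    joint, _, _ = np.histogram2d(fused.ravel(), src.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum())
```

      For fusion benchmarks the MI column is usually reported as the sum over both sources, i.e. `mi(fused, ir) + mi(fused, vis)`.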
    • Table 2. Objective evaluation metrics for each method on 221 pairs of images from the RoadScene dataset

      Method     SD       EN      DF       EI       AG      SF       Qp      MI
      GTF        53.0565  7.5013  3.9626   35.3473  3.3552  9.4492   0.2321  15.0027
      GANMcC     38.6292  6.8719  3.8104   35.6541  3.3365  8.0408   0.1863  13.7437
      GAN-FM     38.3201  7.0326  4.8691   41.5556  3.9687  10.4286  0.2467  14.0652
      SDNet      44.9798  7.3160  7.1417   64.0914  6.0926  15.1815  0.3998  14.6320
      DenseFuse  42.3739  7.1708  5.3075   46.3451  4.4202  11.2749  0.3938  14.3417
      DRF        17.4962  5.8409  1.3293   13.3512  1.2238  2.7764   0.0835  11.6818
      IFSepR     33.4337  6.8843  5.3954   44.7533  4.4083  13.2158  0.3513  13.7686
      Ours       55.2487  7.6632  10.8311  98.7832  9.3087  21.3638  0.5034  15.3265
    • Table 3. Objective evaluation metrics for different fusion methods on 43 pairs of images from the TNO dataset

      Fusion          SD       EN      DF      EI       AG      SF       Qp      MI
      Addition        39.1200  7.0708  5.2884  48.9269  4.6386  9.8304   0.4166  14.1415
      Multiplication  37.8513  7.0431  5.1052  47.2807  4.4835  9.3957   0.3856  14.0862
      Concatenation   42.3549  7.1600  4.8912  46.2831  4.3370  9.2844   0.2956  14.3200
      ASFF            42.8398  7.1284  5.5951  51.0774  5.0215  10.0065  0.3295  14.2568
      Ours            43.6229  7.1650  5.7459  53.2266  5.0586  10.7925  0.4995  14.3300
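
      The first three rows of Table 3 are fixed, hand-crafted fusion rules; ASFF and the proposed HAFF learn their weights instead. A sketch of the fixed rules (the channel reduction that normally follows concatenation is assumed, not shown):

```python
import torch

def fuse(f_ir, f_vis, mode="addition"):
    """Hand-crafted fusion rules compared in Table 3 (sketch)."""
    if mode == "addition":
        return f_ir + f_vis                     # element-wise sum
    if mode == "multiplication":
        return f_ir * f_vis                     # element-wise product
    if mode == "concatenation":
        return torch.cat([f_ir, f_vis], dim=1)  # channel-wise stacking
    raise ValueError(f"unknown fusion mode: {mode}")
```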
    • Table 4. Objective evaluation metrics for different weighting values on 43 pairs of images from the TNO dataset

      Method  SD       EN      DF      EI       AG      SF       Qp      MI
      ω=0     44.7291  7.2098  5.2815  50.0910  4.6891  10.0734  0.3338  14.3197
      ω=0.1   41.6797  7.1281  4.7107  44.7071  4.1901  9.0897   0.2809  14.2563
      ω=0.2   36.9613  7.0059  4.7370  43.4540  4.1182  8.9765   0.2711  14.0118
      ω=0.3   37.3071  7.0519  4.6598  43.6528  4.0920  8.8187   0.2121  14.1038
      ω=0.4   35.3388  6.9910  4.7573  43.8456  4.1299  8.9738   0.2201  13.9821
      ω=0.5   34.6621  6.9800  4.9332  44.9583  4.2583  9.2221   0.2058  13.9599
      ω=0.6   31.2020  6.8659  4.9018  43.3784  4.1468  9.0474   0.1835  13.7318
      ω=0.7   33.3493  6.9473  5.0526  45.3067  4.3051  9.3688   0.2018  13.8947
      ω=0.8   35.1397  7.0141  4.8359  43.7822  4.1602  8.9426   0.1962  14.0282
      ω=0.9   31.4273  6.8663  5.1649  45.8445  4.3726  9.5681   0.1898  13.7325
      ω=1.0   34.2250  6.9726  4.9425  43.7664  4.1684  9.1598   0.1991  13.9451
      Ours    43.6229  7.1650  5.7459  53.2266  5.0586  10.7925  0.4995  14.3300
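
      Table 4 and the weight-sweep figure replace the learned HAFF weighting with a fixed weight ω. A plausible reading, stated here as an assumption (including which branch ω multiplies), is a convex combination of the infrared and visible features:

```python
def fixed_weight_fusion(feat_ir, feat_vis, omega):
    """Hypothetical fixed-weight baseline behind the ω sweep in Table 4."""
    assert 0.0 <= omega <= 1.0
    return omega * feat_ir + (1.0 - omega) * feat_vis
```

      Under this reading, every fixed ω falls below the learned weighting ("Ours") on most metrics, which is the point of the ablation.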
    • Table 5. Objective evaluation metrics for different balance parameter values on 43 pairs of images from the TNO dataset

      λ      SD       EN      DF      EI       AG      SF       Qp      MI
      λ=0.1  39.8836  6.8782  4.9477  50.3209  4.1579  9.1012   0.3946  13.7563
      λ=0.2  40.9696  6.9291  5.0301  52.4783  4.2433  9.3433   0.3980  13.8582
      λ=0.3  42.6061  6.9938  4.9322  51.9882  4.1768  10.1270  0.4042  13.9877
      λ=0.4  41.2148  7.0141  5.0890  52.2599  4.3055  9.4051   0.4031  14.0281
      λ=0.5  41.8657  7.0044  5.1368  53.0034  4.3732  9.4556   0.4057  14.0088
      λ=0.6  42.7849  6.9210  5.0168  50.3367  4.2424  10.2695  0.3902  13.8420
      λ=0.7  43.6229  7.1650  5.7459  53.2266  5.0586  10.7925  0.4995  14.3300
      λ=0.8  42.6738  6.9603  5.0696  49.1183  4.2991  10.3194  0.4150  13.9206
      λ=0.9  40.5754  6.9986  5.4492  47.2277  4.5437  9.9541   0.4081  13.9972
      λ=1.0  39.1473  7.0221  4.9707  49.8322  4.2551  9.2919   0.4082  14.0441
    • Table 6. Objective evaluation metrics for each method on 15 pairs of MRI-CT medical images

      Method     SD       EN      DF      EI       AG      SF       Qp      MI
      GTF        60.6857  4.4535  6.6577  57.8738  5.6328  21.7096  0.0264  8.9071
      GANMcC     48.0953  4.5493  3.9745  37.3691  3.5264  11.1650  0.0095  9.0986
      GAN-FM     58.4541  5.8833  5.0845  43.8918  4.2130  18.6376  0.0186  11.7667
      SDNet      62.4863  5.1234  7.9727  70.8844  6.8823  23.5017  0.0308  10.2468
      DenseFuse  63.0141  4.4407  5.4302  47.4444  4.5968  17.6804  0.0295  8.8815
      DRF        23.5208  4.4353  0.9759  10.0041  0.9146  2.6619   0.0213  8.8707
      IFSepR     54.7893  5.2338  8.7076  89.1554  7.0692  40.3196  0.0113  10.4677
      Ours       63.3236  6.6326  8.7420  74.8356  7.3261  23.1368  0.0277  13.2653
    • Table 7. Efficiency comparison of the deep-learning-based models

      Parameter               GANMcC  GAN-FM  SDNet   DenseFuse  DRF      IFSepR  Ours
      Parameter number /10^6  8.680   59.014  0.256   4.245      183.636  1.960   8.893
      FPS /(frame·s⁻¹)        1.210   14.285  26.316  38.610     1.112    0.806   217
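
      Both rows of Table 7 can be reproduced for any PyTorch fusion model along the lines below; the two-input `model(ir, vis)` signature and the warm-up/run counts are assumptions, not the paper's measurement protocol:

```python
import time
import torch

def count_params_millions(model: torch.nn.Module) -> float:
    """Parameter number /10^6, as in the first row of Table 7."""
    return sum(p.numel() for p in model.parameters()) / 1e6

@torch.no_grad()
def measure_fps(model, ir, vis, runs=100, warmup=10) -> float:
    """FPS /(frame·s^-1): repeated fused-inference calls per second."""
    model.eval()
    for _ in range(warmup):          # warm-up stabilizes timings
        model(ir, vis)
    if torch.cuda.is_available():
        torch.cuda.synchronize()     # finish queued GPU work before timing
    start = time.perf_counter()
    for _ in range(runs):
        model(ir, vis)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return runs / (time.perf_counter() - start)
```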
    Citation

    Hui Wang, Xiaoqing Luo, Zhancheng Zhang. Infrared and Visible Image Fusion Based on Separate Expression of Mutual Information Features[J]. Laser & Optoelectronics Progress, 2023, 60(24): 2410002

    Paper Information

    Category: Image Processing

    Received: Feb. 13, 2023

    Accepted: Apr. 7, 2023

    Published Online: Dec. 4, 2023

    Corresponding Author: Xiaoqing Luo (xqluo@jiangnan.edu.cn)

    DOI: 10.3788/LOP230855
