Infrared and Laser Engineering, Volume 51, Issue 12, 20220125 (2022)

A review of deep learning fusion methods for infrared and visible images

Lin Li, Hongmei Wang, and Chenkai Li
Author Affiliations
  • School of Astronautics, Northwestern Polytechnical University, Xi’an 710072, China
    References (105)

    [1] J Ma, Y Ma, C Li. Infrared and visible image fusion methods and applications: A survey. Information Fusion, 45, 153-178(2019).

    [2] J Ma, C Chen, C Li. Infrared and visible image fusion via gradient transfer and total variation minimization. Information Fusion, 31, 100-109(2016).

    [3] Ying Shen, Chunhong Huang, Feng Huang, et al. Research progress of infrared and visible image fusion technology. Journal of Infrared and Millimeter Waves, 50, 20200467(2021).

    [4] X Ji, G Zhang. Image fusion method of SAR and infrared image based on curvelet transform with adaptive weighting. Multimedia Tools and Applications, 76, 17633-17649(2017).

    [5] Li H, Zhou Y T, Chellappa R. SAR/IR sensor image fusion and real-time implementation[C]//Conference Record of the Twenty-Ninth Asilomar Conference on Signals, Systems and Computers. IEEE, 1995, 2: 1121-1125.

    [6] Ye Y, Zhao B, Tang L. SAR and visible image fusion based on local non-negative matrix factorization[C]//2009 9th International Conference on Electronic Measurement & Instruments. IEEE, 2009: 4263-4266.

    [7] Ali M A, Clausi D A. Automatic registration of SAR and visible band remote sensing images[C]//IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2002, 3: 1331-1333.

    [8] Parmar K, Kher R K, Thakkar F N. Analysis of CT and MRI image fusion using wavelet transform[C]//2012 International Conference on Communication Systems and Network Technologies. IEEE, 2012: 124-127.

    [9] X Liu, W Mei, H Du. Structure tensor and nonsubsampled shearlet transform based algorithm for CT and MRI image fusion. Neurocomputing, 235, 131-139(2017).

    [10] J Ma, W Yu, P Liang, et al. FusionGAN: A generative adversarial network for infrared and visible image fusion. Information Fusion, 48, 11-26(2019).

    [11] L Bai, W Zhang, X Pan, et al. Underwater image enhancement based on global and local equalization of histogram and dual-image multi-scale fusion. IEEE Access, 8, 128973-128990(2020).

    [12] M Rashid, M A Khan, M Alhaisoni, et al. A sustainable deep learning framework for object recognition using multi-layers deep features fusion and selection. Sustainability, 12, 5037(2020).

    [13] Cong Tang, Yongshun Ling, Hua Yang, et al. Decision-level fusion detection for infrared and visible spectra based on deep learning. Journal of Infrared and Millimeter Waves, 48, 0626001(2019).

    [14] Y Shen. RGBT bimodal twin tracking network based on feature fusion. Journal of Infrared and Millimeter Waves, 50, 20200459(2021).

    [15] Adamchuk V I, Rossel R V, Sudduth K A, et al. Sensor fusion for precision agriculture[M]//Thomas C. Sensor Fusion: Foundation and Applications. Rijeka, Croatia: InTech, 2011: 27-40.

    [16] Z Wang, G Li, X Jiang. Flood disaster area detection method based on optical and SAR remote sensing image fusion. Journal of Radar, 9, 539-553(2020).

    [17] Xie Yang, Tao Tong, Songyan Lu, et al. Fusion of infrared and visible images based on multi-features. Optical Precision Engineering, 22, 489-496(2014).

    [18] J Chen, X Li, L Luo, et al. Infrared and visible image fusion based on target-enhanced multiscale transform decomposition. Information Sciences, 508, 64-78(2020).

    [19] Y Liu, J Jin, Y Wang, et al. Region level based multi-focus image fusion using quaternion wavelet and normalized cut. Signal Process, 97, 9-30(2014).

    [20] Hao Chen, Yanjie Wang. Research on image fusion algorithm based on laplace pyramid transform. Laser & Infrared, 39, 439-442(2009).

    [21] M Choi, R Y Kim, M R Nam, et al. Fusion of multispectral and panchromatic satellite images using the curvelet transform. IEEE Geoscience and Remote Sensing Letters, 2, 136-140(2005).

    [22] B Yang, S Li. Multifocus image fusion and restoration with sparse representation. IEEE Transactions on Instrumentation and Measurement, 59, 884-892(2009).

    [23] Y Liu, S Liu, Z Wang. A general framework for image fusion based on multi-scale transform and sparse representation. Information Fusion, 24, 147-164(2015).

    [24] Xianhong Liu, Zhibin Chen, Mengze Qin. Fusion of infrared and visible light images combined with guided filtering and convolutional sparse representation. Optical Precision Engineering, 26, 1242-1253(2018).

    [25] Du X, El-Khamy M, Lee J, et al. Fused DNN: A deep neural network fusion approach to fast and robust pedestrian detection[C]//2017 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2017: 953-961.

    [26] Jindun Dai, Yadong Liu, Xianyin Mao, et al. Infrared and visible image fusion based on FDST and dual-channel PCNN. Infrared and Laser Engineering, 48, 0204001(2019).

    [27] Z Fu, X Wang, J Xu, et al. Infrared and visible images fusion based on RPCA and NSCT. Infrared Physics & Technology, 77, 114-123(2016).

    [28] N Mitianoudis, T Stathaki. Pixel-based and region-based image fusion schemes using ICA bases. Information Fusion, 8, 131-142(2007).

    [29] W Kong, Y Lei, H Zhao. Adaptive fusion method of visible light and infrared images based on non-subsampled shearlet transform and fast non-negative matrix factorization. Infrared Physics & Technology, 67, 161-172(2014).

    [30] A Wang, M Wang. RGB-D salient object detection via minimum barrier distance transform and saliency fusion. IEEE Signal Processing Letters, 24, 663-667(2017).

    [31] Xin Wang, Tongbo Ji, Fu Liu. Fusion of infrared and visible light images combined with object extraction and compressed Sensing. Optical Precision Engineering, 24, 1743-1753(2016).

    [32] Xiaorong Cui, Tao Shen, Jianlu Huang, et al. Infrared and visible image fusion based on BEMD and improved visual saliency. Infrared Technology, 42, 1061(2020).

    [33] J J Lewis, R J O’Callaghan, S G Nikolov, et al. Pixel- and region-based image fusion with complex wavelets. Information Fusion, 8, 119-130(2007).

    [34] Rajkumar S, Mouli P C. Infrared and visible image fusion using entropy and neuro-fuzzy concepts[C]//ICT and Critical Infrastructure: Proceedings of the 48th Annual Convention of Computer Society of India, Vol I. Cham: Springer, 2014: 93-100.

    [35] J Zhao, G Cui, X Gong, et al. Fusion of visible and infrared images using global entropy and gradient constrained regularization. Infrared Physics & Technology, 81, 201-209(2017).

    [36] C Sun, C Zhang, N Xiong. Infrared and visible image fusion techniques based on deep learning: A review. Electronics, 9, 2162(2020).

    [37] J Ma, C Chen, C Li, et al. Infrared and visible image fusion via gradient transfer and total variation minimization. Inf Fusion, 31, 100-109(2016).

    [38] Y Lecun, L Bottou, Y Bengio, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86, 2278-2324(1998).

    [39] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks[C]//Proceedings of the 25th International Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA. 2012: 1097-1105.

    [40] Y Liu, X Chen, H Peng, et al. Multi-focus image fusion with a deep convolutional neural network. Information Fusion, 36, 191-207(2017).

    [41] Li H, Wu X J, Kittler J. Infrared and visible image fusion using a deep learning framework[C]//2018 24th International Conference on Pattern Recognition (ICPR). IEEE, 2018: 2705-2710.

    [42] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition[EB/OL]. (2014-09-04)[2022-02-23]. https://arxiv.org/abs/1409.1556.

    [43] Y Liu, X Chen, J Cheng, et al. Infrared and visible image fusion with convolutional neural networks. International Journal of Wavelets, Multiresolution and Information Processing, 16, 1850018(2018).

    [44] H Li, X J Wu, T S Durrani. Infrared and visible image fusion with ResNet and zero-phase component analysis. Infrared Physics & Technology, 102, 103039(2019).

    [45] Y Cui, H Du, W Mei. Infrared and visible image fusion using detail enhanced channel attention network. IEEE Access, 7, 182185-182197(2019).

    [46] Y Zhang, Y Liu, P Sun, et al. IFCNN: A general image fusion framework based on convolutional neural network. Information Fusion, 54, 99-118(2020).

    [47] R Hou, D Zhou, R Nie, et al. VIF-Net: An unsupervised framework for infrared and visible image fusion. IEEE Transactions on Computational Imaging, 6, 640-651(2020).

    [48] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 770-778.

    [49] Huang G, Liu Z, Van Der Maaten L, et al. Densely connected convolutional networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 4700-4708.

    [50] L Li, Z Xia, H Han, et al. Infrared and visible image fusion using a shallow CNN and structural similarity constraint. IET Image Processing, 14, 3562-3571(2020).

    [51] Y Li, J Wang, Z Miao, et al. Unsupervised densely attention network for infrared and visible image fusion. Multimedia Tools and Applications, 79, 34685-34696(2020).

    [52] Y Long, H Jia, Y Zhong, et al. RXDNFuse: An aggregated residual dense network for infrared and visible image fusion. Information Fusion, 69, 128-141(2021).

    [53] J Zhu, Q Dou, L Jian, et al. Multiscale channel attention network for infrared and visible image fusion. Concurrency and Computation: Practice and Experience, 33, e6155(2021).

    [54] H Xu, J Ma, J Jiang, et al. U2Fusion: A unified unsupervised image fusion network. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44, 502-518(2020).

    [55] Xu H, Ma J, Le Z, et al. FusionDN: A unified densely connected network for image fusion[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2020, 34(7): 12484-12491.

    [56] Prabhakar K R, Srikar V S, Babu R V. DeepFuse: A deep unsupervised approach for exposure fusion with extreme exposure image pairs[C]//Proceedings of the IEEE International Conference on Computer Vision, 2017: 4714-4722.

    [57] H Li, X J Wu. DenseFuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing, 28, 2614-2623(2018).

    [58] Lin T Y, Maire M, Belongie S, et al. Microsoft COCO: Common objects in context[C]//European Conference on Computer Vision. Cham: Springer, 2014: 740-755.

    [59] Zhao Z, Xu S, Zhang C, et al. DIDFuse: Deep image decomposition for infrared and visible image fusion[EB/OL]. (2020-03-20)[2022-02-23]. https://arxiv.org/abs/2003.09210.

    [60] Y Pan, D Pi, I A Khan, et al. DenseNetFuse: A study of deep unsupervised DenseNet to infrared and visual image fusion. Journal of Ambient Intelligence and Humanized Computing, 12, 10339-10351(2021).

    [61] H Wang, W An, L Li, et al. Infrared and visible image fusion based on multi‐channel convolutional neural network. IET Image Processing, 16, 1575-1584(2022).

    [62] L Liu, M Chen, M Xu, et al. Two-stream network for infrared and visible images fusion. Neurocomputing, 460, 50-58(2021).

    [63] H Li, X J Wu, T Durrani. NestFuse: An infrared and visible image fusion architecture based on nest connection and spatial/channel attention models. IEEE Transactions on Instrumentation and Measurement, 69, 9645-9656(2020).

    [64] H Li, X J Wu, J Kittler. RFN-Nest: An end-to-end residual fusion network for infrared and visible images. Information Fusion, 73, 72-86(2021).

    [65] Z Wang, J Wang, Y Wu, et al. UNFusion: A unified multi-scale densely connected network for infrared and visible image fusion. IEEE Transactions on Circuits and Systems for Video Technology, 32, 3360-3374(2021).

    [66] Fu Y, Wu X J. A dual-branch network for infrared and visible image fusion[C]//2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2021: 10675-10680.

    [67] Zhang H, Xu H, Xiao Y, et al. Rethinking the image fusion: A fast unified image fusion network based on proportional maintenance of gradient and intensity[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2020, 34(7): 12797-12804.

    [68] L Jian, X Yang, Z Liu, et al. SEDRFuse: A symmetric encoder–decoder with residual block network for infrared and visible image fusion. IEEE Transactions on Instrumentation and Measurement, 70, 1-15(2020).

    [69] F Zhao, W Zhao, L Yao, et al. Self-supervised feature adaption for infrared and visible image fusion. Information Fusion, 76, 189-203(2021).

    [70] J Ma, L Tang, M Xu, et al. STDFusionNet: An infrared and visible image fusion network based on salient target detection. IEEE Transactions on Instrumentation and Measurement, 70, 1-13(2021).

    [71] A Raza, J Liu, Y Liu, et al. IR-MSDNet: Infrared and visible image fusion based on infrared features and multiscale dense network. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 14, 3426-3437(2021).

    [72] L Tang, J Yuan, J Ma. Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network. Information Fusion, 82, 28-42(2022).

    [73] Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets[C]//Proceedings of the 27th International Conference on Neural Information Processing Systems, 2014: 2672-2680.

    [74] Wei L, Zhang S, Gao W, et al. Person transfer GAN to bridge domain gap for person re-identification[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 79-88.

    [75] Li J, Liang X, Wei Y, et al. Perceptual generative adversarial networks for small object detection[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017: 1222-1230.

    [76] J Rabbi, N Ray, M Schubert, et al. Small-object detection in remote sensing images with end-to-end edge-enhanced GAN and object detector network. Remote Sensing, 12, 1432(2020).

    [77] Y Fu, X J Wu, T Durrani. Image fusion based on generative adversarial network consistent with perception. Information Fusion, 72, 110-125(2021).

    [78] Y Yang, J Liu, S Huang, et al. Infrared and visible image fusion via texture conditional generative adversarial network. IEEE Transactions on Circuits and Systems for Video Technology, 31, 4771-4783(2021).

    [79] J Ma, H Xu, J Jiang, et al. DDcGAN: A dual-discriminator conditional generative adversarial network for multi-resolution image fusion. IEEE Transactions on Image Processing, 29, 4980-4995(2020).

    [80] Q Li, L Lu, Z Li, et al. Coupled GAN with relativistic discriminators for infrared and visible images fusion. IEEE Sensors Journal, 21, 7458-7467(2019).

    [81] S Li, X Kang, J Hu. Image fusion with guided filtering. IEEE Transactions on Image Processing, 22, 2864-2875(2013).

    [82] J Ma, P Liang, W Yu, et al. Infrared and visible image fusion via detail preserving adversarial learning. Information Fusion, 54, 85-98(2020).

    [83] J Xu, X Shi, S Qin, et al. LBP-BEGAN: A generative adver-sarial network architecture for infrared and visible image fusion. Infrared Physics & Technology, 104, 103144(2020).

    [84] J Li, H Huo, K Liu, et al. Infrared and visible image fusion using dual discriminators generative adversarial networks with Wasserstein distance. Information Sciences, 529, 28-41(2020).

    [85] J Li, H Huo, C Li, et al. AttentionFGAN: Infrared and visible image fusion using attention-based generative adversarial networks. IEEE Transactions on Multimedia, 23, 1383-1396(2020).

    [86] X Yang, H Huo, J Li, et al. DSG-fusion: Infrared and visible image fusion via generative adversarial networks and guided filter. Expert Systems with Applications, 200, 116905(2022).

    [87] J Li, H Huo, C Li, et al. Multigrained attention network for infrared and visible image fusion. IEEE Transactions on Instrumentation and Measurement, 70, 1-12(2020).

    [88] J Ma, H Zhang, Z Shao, et al. GANMcC: A generative adversarial network with multiclassification constraints for infrared and visible image fusion. IEEE Transactions on Instrumentation and Measurement, 70, 1-14(2020).

    [89] J Hou, D Zhang, W Wu, et al. A generative adversarial network for infrared and visible image fusion based on semantic segmentation. Entropy, 23, 376(2021).

    [90] Zhou H, Wu W, Zhang Y, et al. Semantic-supervised infrared and visible image fusion via a dual-discriminator generative adversarial network[J/OL]. IEEE Transactions on Multimedia (Early Access), (2021-11-22)[2022-02-23]. https://ieeexplore.ieee.org/document/9623476.

    [91] Chen L C, Zhu Y, Papandreou G, et al. Encoder-decoder with atrous separable convolution for semantic image segmentation[C]//Proceedings of the European Conference on Computer Vision (ECCV), 2018: 801-818.

    [92] J W Roberts, J A van Aardt, F B Ahmed. Assessment of image fusion procedures using entropy, image quality, and multispectral classification. Journal of Applied Remote Sensing, 2, 023522(2008).

    [93] Wang Z, Simoncelli E P, Bovik A C. Multiscale structural similarity for image quality assessment[C]//The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003. IEEE, 2003, 2: 1398-1402.

    [94] Y J Rao. In-fibre Bragg grating sensors. Measurement Science and Technology, 8, 355(1997).

    [95] G Qu, D Zhang, P Yan. Information measure for performance of image fusion. Electronics Letters, 38, 313-315(2002).

    [96] A M Eskicioglu, P S Fisher. Image quality measures and their performance. IEEE Transactions on Communications, 43, 2959-2965(1995).

    [97] W Guo, N Xiong, H C Chao, et al. Design and analysis of self-adapted task scheduling strategies in wireless sensor networks. Sensors, 11, 6533-6554(2011).

    [98] G Cui, H Feng, Z Xu, et al. Detail preserved fusion of visible and infrared images using regional saliency extraction and multi-scale image decomposition. Optics Communications, 341, 199-209(2015).

    [99] Y Han, Y Cai, Y Cao, et al. A new image fusion performance metric based on visual information fidelity. Information Fusion, 14, 127-135(2013).

    [100] C S Xydeas, V Petrovic. Objective image fusion performance measure. Electronics Letters, 36, 308-309(2000).

    [101] M Deshmukh, U Bhosale. Image fusion and image quality assessment of fused images. International Journal of Image Processing (IJIP), 4, 484(2010).

    [102] V Aslantas, E Bendes. A new image quality metric for image fusion: The sum of the correlations of differences. Aeu-International Journal of Electronics and Communications, 69, 1890-1896(2015).

    [103] M B A Haghighat, A Aghagolzadeh, H Seyedarabi. A non-reference image fusion metric based on mutual information of image features. Computers & Electrical Engineering, 37, 744-756(2011).

    [104] A Toet. The TNO multiband image data collection. Data in Brief, 15, 249-251(2017).

    [105] Davis J W, Sharma V. OTCBVS benchmark dataset collection[EB/OL]. (2007)[2022-02-23]. http://www.cse.ohio-state.edu/otcbvs-bench.

    Paper Information

    Category: Image processing

    Received: Feb. 23, 2022

    Accepted: --

    Published Online: Jan. 10, 2023

    DOI:10.3788/IRLA20220125
