Infrared and Laser Engineering, Volume 51, Issue 12, 20220125 (2022)
A review of deep learning fusion methods for infrared and visible images
[1] Ma J, Ma Y, Li C. Infrared and visible image fusion methods and applications: A survey[J]. Information Fusion, 45, 153-178(2019).
[2] Ma J, Chen C, Li C. Infrared and visible image fusion via gradient transfer and total variation minimization[J]. Information Fusion, 31, 100-109(2016).
[3] Shen Ying, Huang Chunhong, Huang Feng, et al. Research progress of infrared and visible image fusion technology[J]. Journal of Infrared and Millimeter Waves, 50, 20200467(2021).
[4] Ji X, Zhang G. Image fusion method of SAR and infrared image based on curvelet transform with adaptive weighting[J]. Multimedia Tools and Applications, 76, 17633-17649(2017).
[5] Li H, Zhou Y T, Chellappa R. SAR/IR sensor image fusion and real-time implementation[C]//Conference Record of The Twenty-Ninth Asilomar Conference on Signals, Systems & Computers. IEEE, 1995, 2: 1121-1125.
[6] Ye Y, Zhao B, Tang L. SAR and visible image fusion based on local non-negative matrix factorization[C]//2009 9th International Conference on Electronic Measurement & Instruments. IEEE, 2009: 4263-4266.
[7] Ali M A, Clausi D A. Automatic registration of SAR and visible band remote sensing images[C]//IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2002, 3: 1331-1333.
[8] Parmar K, Kher R K, Thakkar F N. Analysis of CT and MRI image fusion using wavelet transform[C]//2012 International Conference on Communication Systems and Network Technologies. IEEE, 2012: 124-127.
[9] Liu X, Mei W, Du H. Structure tensor and nonsubsampled shearlet transform based algorithm for CT and MRI image fusion[J]. Neurocomputing, 235, 131-139(2017).
[10] Ma J, Yu W, Liang P, et al. FusionGAN: A generative adversarial network for infrared and visible image fusion[J]. Information Fusion, 48, 11-26(2019).
[11] Bai L, Zhang W, Pan X, et al. Underwater image enhancement based on global and local equalization of histogram and dual-image multi-scale fusion[J]. IEEE Access, 8, 128973-128990(2020).
[12] Rashid M, Khan M A, Alhaisoni M, et al. A sustainable deep learning framework for object recognition using multi-layers deep features fusion and selection[J]. Sustainability, 12, 5037(2020).
[13] Tang Cong, Ling Yongshun, Yang Hua, et al. Decision-level fusion detection for infrared and visible spectra based on deep learning[J]. Journal of Infrared and Millimeter Waves, 48, 0626001(2019).
[14] Shen Y. RGBT bimodal twin tracking network based on feature fusion[J]. Journal of Infrared and Millimeter Waves, 50, 20200459(2021).
[15] Adamchuk V I, Rossel R V, Sudduth K A, et al. Sensor fusion for precision agriculture[M]//Thomas C. Sensor Fusion - Foundation and Applications. Rijeka, Croatia: InTech, 2011: 27-40.
[16] Wang Z, Li G, Jiang X. Flood disaster area detection method based on optical and SAR remote sensing image fusion[J]. Journal of Radar, 9, 539-553(2020).
[17] Yang Xie, Tong Tao, Lu Songyan, et al. Fusion of infrared and visible images based on multi-features[J]. Optics and Precision Engineering, 22, 489-496(2014).
[18] Chen J, Li X, Luo L, et al. Infrared and visible image fusion based on target-enhanced multiscale transform decomposition[J]. Information Sciences, 508, 64-78(2020).
[19] Liu Y, Jin J, Wang Y, et al. Region level based multi-focus image fusion using quaternion wavelet and normalized cut[J]. Signal Process, 97, 9-30(2014).
[20] Chen Hao, Wang Yanjie. Research on image fusion algorithm based on Laplace pyramid transform[J]. Laser & Infrared, 39, 439-442(2009).
[21] Choi M, Kim R Y, Nam M R, et al. Fusion of multispectral and panchromatic satellite images using the curvelet transform[J]. IEEE Geoscience and Remote Sensing Letters, 2, 136-140(2005).
[22] Yang B, Li S. Multifocus image fusion and restoration with sparse representation[J]. IEEE Transactions on Instrumentation and Measurement, 59, 884-892(2009).
[23] Liu Y, Liu S, Wang Z. A general framework for image fusion based on multi-scale transform and sparse representation[J]. Information Fusion, 24, 147-164(2015).
[24] Liu Xianhong, Chen Zhibin, Qin Mengze. Fusion of infrared and visible light images combined with guided filtering and convolutional sparse representation[J]. Optics and Precision Engineering, 26, 1242-1253(2018).
[25] Du X, El-Khamy M, Lee J, et al. Fused DNN: A deep neural network fusion approach to fast and robust pedestrian detection[C]//2017 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2017: 953-961.
[26] Dai Jindun, Liu Yadong, Mao Xianyin, et al. Infrared and visible image fusion based on FDST and dual-channel PCNN[J]. Infrared and Laser Engineering, 48, 0204001(2019).
[27] Fu Z, Wang X, Xu J, et al. Infrared and visible images fusion based on RPCA and NSCT[J]. Infrared Physics & Technology, 77, 114-123(2016).
[28] Mitianoudis N, Stathaki T. Pixel-based and region-based image fusion schemes using ICA bases[J]. Information Fusion, 8, 131-142(2007).
[29] Kong W, Lei Y, Zhao H. Adaptive fusion method of visible light and infrared images based on non-subsampled shearlet transform and fast non-negative matrix factorization[J]. Infrared Physics & Technology, 67, 161-172(2014).
[30] Wang A, Wang M. RGB-D salient object detection via minimum barrier distance transform and saliency fusion[J]. IEEE Signal Processing Letters, 24, 663-667(2017).
[31] Wang Xin, Ji Tongbo, Liu Fu. Fusion of infrared and visible light images combined with object extraction and compressed sensing[J]. Optics and Precision Engineering, 24, 1743-1753(2016).
[32] Cui Xiaorong, Shen Tao, Huang Jianlu, et al. Infrared and visible image fusion based on BEMD and improved visual saliency[J]. Infrared Technology, 42, 1061(2020).
[33] Lewis J J, O'Callaghan R J, Nikolov S G, et al. Pixel- and region-based image fusion with complex wavelets[J]. Information Fusion, 8, 119-130(2007).
[34] Rajkumar S, Mouli P C. Infrared and visible image fusion using entropy and neuro-fuzzy concepts[C]//ICT and Critical Infrastructure: Proceedings of the 48th Annual Convention of Computer Society of India - Vol I. Cham: Springer, 2014: 93-100.
[35] Zhao J, Cui G, Gong X, et al. Fusion of visible and infrared images using global entropy and gradient constrained regularization[J]. Infrared Physics & Technology, 81, 201-209(2017).
[36] Sun C, Zhang C, Xiong N. Infrared and visible image fusion techniques based on deep learning: A review[J]. Electronics, 9, 2162(2020).
[37] Ma J, Chen C, Li C, et al. Infrared and visible image fusion via gradient transfer and total variation minimization[J]. Information Fusion, 31, 100-109(2016).
[38] Lecun Y, Bottou L, Bengio Y, et al. Gradient-based learning applied to document recognition[J]. Proceedings of the IEEE, 86, 2278-2324(1998).
[39] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks[C]//Proceedings of the 25th International Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA. 2012: 1097-1105.
[40] Liu Y, Chen X, Peng H, et al. Multi-focus image fusion with a deep convolutional neural network[J]. Information Fusion, 36, 191-207(2017).
[41] Li H, Wu X J, Kittler J. Infrared and visible image fusion using a deep learning framework[C]//2018 24th International Conference on Pattern Recognition (ICPR). IEEE, 2018: 2705-2710.
[42] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition[EB/OL]. (2014-09-04)[2022-02-23]. https://arxiv.org/abs/1409.1556.
[43] Liu Y, Chen X, Cheng J, et al. Infrared and visible image fusion with convolutional neural networks[J]. International Journal of Wavelets, Multiresolution and Information Processing, 16, 1850018(2018).
[44] Li H, Wu X J, Durrani T S. Infrared and visible image fusion with ResNet and zero-phase component analysis[J]. Infrared Physics & Technology, 102, 103039(2019).
[45] Cui Y, Du H, Mei W. Infrared and visible image fusion using detail enhanced channel attention network[J]. IEEE Access, 7, 182185-182197(2019).
[46] Zhang Y, Liu Y, Sun P, et al. IFCNN: A general image fusion framework based on convolutional neural network[J]. Information Fusion, 54, 99-118(2020).
[47] Hou R, Zhou D, Nie R, et al. VIF-Net: An unsupervised framework for infrared and visible image fusion[J]. IEEE Transactions on Computational Imaging, 6, 640-651(2020).
[48] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 770-778.
[49] Huang G, Liu Z, Van Der Maaten L, et al. Densely connected convolutional networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 4700-4708.
[50] Li L, Xia Z, Han H, et al. Infrared and visible image fusion using a shallow CNN and structural similarity constraint[J]. IET Image Processing, 14, 3562-3571(2020).
[51] Li Y, Wang J, Miao Z, et al. Unsupervised densely attention network for infrared and visible image fusion[J]. Multimedia Tools and Applications, 79, 34685-34696(2020).
[52] Long Y, Jia H, Zhong Y, et al. RXDNFuse: A aggregated residual dense network for infrared and visible image fusion[J]. Information Fusion, 69, 128-141(2021).
[53] Zhu J, Dou Q, Jian L, et al. Multiscale channel attention network for infrared and visible image fusion[J]. Concurrency and Computation: Practice and Experience, 33, e6155(2021).
[54] Xu H, Ma J, Jiang J, et al. U2Fusion: A unified unsupervised image fusion network[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44, 502-518(2020).
[55] Xu H, Ma J, Le Z, et al. FusionDN: A unified densely connected network for image fusion[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2020, 34(7): 12484-12491.
[56] Prabhakar K R, Srikar V S, Babu R V. DeepFuse: A deep unsupervised approach for exposure fusion with extreme exposure image pairs[C]//Proceedings of the IEEE International Conference on Computer Vision, 2017: 4714-4722.
[57] Li H, Wu X J. DenseFuse: A fusion approach to infrared and visible images[J]. IEEE Transactions on Image Processing, 28, 2614-2623(2018).
[58] Lin T Y, Maire M, Belongie S, et al. Microsoft COCO: Common objects in context[C]//European Conference on Computer Vision. Cham: Springer, 2014: 740-755.
[59] Zhao Z, Xu S, Zhang C, et al. DIDFuse: Deep image decomposition for infrared and visible image fusion[EB/OL]. (2020-03-20)[2022-02-23]. https://arxiv.org/abs/2003.09210.
[60] Pan Y, Pi D, Khan I A, et al. DenseNetFuse: A study of deep unsupervised DenseNet to infrared and visual image fusion[J]. Journal of Ambient Intelligence and Humanized Computing, 12, 10339-10351(2021).
[61] Wang H, An W, Li L, et al. Infrared and visible image fusion based on multi-channel convolutional neural network[J]. IET Image Processing, 16, 1575-1584(2022).
[62] Liu L, Chen M, Xu M, et al. Two-stream network for infrared and visible images fusion[J]. Neurocomputing, 460, 50-58(2021).
[63] Li H, Wu X J, Durrani T. NestFuse: An infrared and visible image fusion architecture based on nest connection and spatial/channel attention models[J]. IEEE Transactions on Instrumentation and Measurement, 69, 9645-9656(2020).
[64] Li H, Wu X J, Kittler J. RFN-Nest: An end-to-end residual fusion network for infrared and visible images[J]. Information Fusion, 73, 72-86(2021).
[65] Wang Z, Wang J, Wu Y, et al. UNFusion: A unified multi-scale densely connected network for infrared and visible image fusion[J]. IEEE Transactions on Circuits and Systems for Video Technology, 32, 3360-3374(2021).
[66] Fu Y, Wu X J. A dual-branch network for infrared and visible image fusion[C]//2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2021: 10675-10680.
[67] Zhang H, Xu H, Xiao Y, et al. Rethinking the image fusion: A fast unified image fusion network based on proportional maintenance of gradient and intensity[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2020, 34(7): 12797-12804.
[68] Jian L, Yang X, Liu Z, et al. SEDRFuse: A symmetric encoder–decoder with residual block network for infrared and visible image fusion[J]. IEEE Transactions on Instrumentation and Measurement, 70, 1-15(2020).
[69] Zhao F, Zhao W, Yao L, et al. Self-supervised feature adaption for infrared and visible image fusion[J]. Information Fusion, 76, 189-203(2021).
[70] Ma J, Tang L, Xu M, et al. STDFusionNet: An infrared and visible image fusion network based on salient target detection[J]. IEEE Transactions on Instrumentation and Measurement, 70, 1-13(2021).
[71] Raza A, Liu J, Liu Y, et al. IR-MSDNet: Infrared and visible image fusion based on infrared features and multiscale dense network[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 14, 3426-3437(2021).
[72] Tang L, Yuan J, Ma J. Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network[J]. Information Fusion, 82, 28-42(2022).
[73] Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets[C]//Proceedings of the 27th International Conference on Neural Information Processing Systems, 2014: 2672-2680.
[74] Wei L, Zhang S, Gao W, et al. Person transfer GAN to bridge domain gap for person re-identification[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 79-88.
[75] Li J, Liang X, Wei Y, et al. Perceptual generative adversarial networks for small object detection[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017: 1222-1230.
[76] Rabbi J, Ray N, Schubert M, et al. Small-object detection in remote sensing images with end-to-end edge-enhanced GAN and object detector network[J]. Remote Sensing, 12, 1432(2020).
[77] Fu Y, Wu X J, Durrani T. Image fusion based on generative adversarial network consistent with perception[J]. Information Fusion, 72, 110-125(2021).
[78] Yang Y, Liu J, Huang S, et al. Infrared and visible image fusion via texture conditional generative adversarial network[J]. IEEE Transactions on Circuits and Systems for Video Technology, 31, 4771-4783(2021).
[79] Ma J, Xu H, Jiang J, et al. DDcGAN: A dual-discriminator conditional generative adversarial network for multi-resolution image fusion[J]. IEEE Transactions on Image Processing, 29, 4980-4995(2020).
[80] Li Q, Lu L, Li Z, et al. Coupled GAN with relativistic discriminators for infrared and visible images fusion[J]. IEEE Sensors Journal, 21, 7458-7467(2019).
[81] Li S, Kang X, Hu J. Image fusion with guided filtering[J]. IEEE Transactions on Image Processing, 22, 2864-2875(2013).
[82] Ma J, Liang P, Yu W, et al. Infrared and visible image fusion via detail preserving adversarial learning[J]. Information Fusion, 54, 85-98(2020).
[83] Xu J, Shi X, Qin S, et al. LBP-BEGAN: A generative adver-sarial network architecture for infrared and visible image fusion[J]. Infrared Physics & Technology, 104, 103144(2020).
[84] Li J, Huo H, Liu K, et al. Infrared and visible image fusion using dual discriminators generative adversarial networks with Wasserstein distance[J]. Information Sciences, 529, 28-41(2020).
[85] Li J, Huo H, Li C, et al. AttentionFGAN: Infrared and visible image fusion using attention-based generative adversarial networks[J]. IEEE Transactions on Multimedia, 23, 1383-1396(2020).
[86] Yang X, Huo H, Li J, et al. DSG-fusion: Infrared and visible image fusion via generative adversarial networks and guided filter[J]. Expert Systems with Applications, 200, 116905(2022).
[87] Li J, Huo H, Li C, et al. Multigrained attention network for infrared and visible image fusion[J]. IEEE Transactions on Instrumentation and Measurement, 70, 1-12(2020).
[88] Ma J, Zhang H, Shao Z, et al. GANMcC: A generative adversarial network with multiclassification constraints for infrared and visible image fusion[J]. IEEE Transactions on Instrumentation and Measurement, 70, 1-14(2020).
[89] Hou J, Zhang D, Wu W, et al. A generative adversarial network for infrared and visible image fusion based on semantic segmentation[J]. Entropy, 23, 376(2021).
[90] Zhou H, Wu W, Zhang Y, et al. Semantic-supervised infrared and visible image fusion via a dual-discriminator generative adversarial network[J/OL]. IEEE Transactions on Multimedia (Early Access), (2021-11-22)[2022-02-23]. https://ieeexplore.ieee.org/document/9623476.
[91] Chen L C, Zhu Y, Papandreou G, et al. Encoder-decoder with atrous separable convolution for semantic image segmentation[C]//Proceedings of the European Conference on Computer Vision (ECCV), 2018: 801-818.
[92] Roberts J W, Van Aardt J A, Ahmed F B. Assessment of image fusion procedures using entropy, image quality, and multispectral classification[J]. Journal of Applied Remote Sensing, 2, 023522(2008).
[93] Wang Z, Simoncelli E P, Bovik A C. Multiscale structural similarity for image quality assessment[C]//The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003. IEEE, 2003, 2: 1398-1402.
[94] Rao Y J. In-fibre Bragg grating sensors[J]. Measurement Science and Technology, 8, 355(1997).
[95] Qu G, Zhang D, Yan P. Information measure for performance of image fusion[J]. Electronics Letters, 38, 313-315(2002).
[96] Eskicioglu A M, Fisher P S. Image quality measures and their performance[J]. IEEE Transactions on Communications, 43, 2959-2965(1995).
[97] Guo W, Xiong N, Chao H C, et al. Design and analysis of self-adapted task scheduling strategies in wireless sensor networks[J]. Sensors, 11, 6533-6554(2011).
[98] Cui G, Feng H, Xu Z, et al. Detail preserved fusion of visible and infrared images using regional saliency extraction and multi-scale image decomposition[J]. Optics Communications, 341, 199-209(2015).
[99] Han Y, Cai Y, Cao Y, et al. A new image fusion performance metric based on visual information fidelity[J]. Information Fusion, 14, 127-135(2013).
[100] Xydeas C S, Petrovic V. Objective image fusion performance measure[J]. Electronics Letters, 36, 308-309(2000).
[101] Deshmukh M, Bhosale U. Image fusion and image quality assessment of fused images[J]. International Journal of Image Processing (IJIP), 4, 484(2010).
[102] Aslantas V, Bendes E. A new image quality metric for image fusion: The sum of the correlations of differences[J]. Aeu-International Journal of Electronics and Communications, 69, 1890-1896(2015).
[103] Haghighat M B A, Aghagolzadeh A, Seyedarabi H. A non-reference image fusion metric based on mutual information of image features[J]. Computers & Electrical Engineering, 37, 744-756(2011).
[104] Toet A. The TNO multiband image data collection[J]. Data in Brief, 15, 249-251(2017).
[105] Davis J W, Sharma V. OTCBVS benchmark dataset collection[EB/OL]. (2007)[2022-02-23]. http://www.cse.ohio-state.edu/otcbvs-bench.
Lin Li, Hongmei Wang, Chenkai Li. A review of deep learning fusion methods for infrared and visible images[J]. Infrared and Laser Engineering, 2022, 51(12): 20220125
Category: Image processing
Received: Feb. 23, 2022
Published Online: Jan. 10, 2023