Laser & Optoelectronics Progress, Volume 60, Issue 24, 2411001 (2023)

Soft Histogram of Gradients Loss: A Loss Function for Optimization of the Image Fusion Networks

Yuxin Long, Wenjie Lai, Huaiyuan Zhang, Hongbo Zhang, Chengshi Li, and Ziji Liu*
Author Affiliations
  • College of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, Sichuan, China
    References (57)

    [1] Prabhakar K R, Srikar V S, Babu R V. DeepFuse: a deep unsupervised approach for exposure fusion with extreme exposure image pairs[C], 4724-4732(2017).

    [2] Li H, Wu X J. DenseFuse: a fusion approach to infrared and visible images[J]. IEEE Transactions on Image Processing, 28, 2614-2623(2018).

    [3] Li H, Wu X J, Durrani T. NestFuse: an infrared and visible image fusion architecture based on nest connection and spatial/channel attention models[J]. IEEE Transactions on Instrumentation and Measurement, 69, 9645-9656(2020).

    [4] Ma J Y, Yu W, Liang P W et al. FusionGAN: a generative adversarial network for infrared and visible image fusion[J]. Information Fusion, 48, 11-26(2019).

    [5] Cheng C Y, Wu X J, Xu T Y et al. UNIFusion: a lightweight unified image fusion network[J]. IEEE Transactions on Instrumentation and Measurement, 70, 5016614(2021).

    [6] Xu M L, Tang L F, Zhang H et al. Infrared and visible image fusion via parallel scene and texture learning[J]. Pattern Recognition, 132, 108929(2022).

    [7] Wang Z S, Wu Y Y, Wang J Y et al. Res2Fusion: infrared and visible image fusion based on dense Res2net and double nonlocal attention models[J]. IEEE Transactions on Instrumentation and Measurement, 71, 5005012(2022).

    [8] Wang Z S, Wang J Y, Wu Y Y et al. UNFusion: a unified multi-scale densely connected network for infrared and visible image fusion[J]. IEEE Transactions on Circuits and Systems for Video Technology, 32, 3360-3374(2022).

    [9] Chen G Y, Wu X J, Xu T Y. Unsupervised infrared image and visible image fusion algorithm based on deep learning[J]. Laser & Optoelectronics Progress, 59, 0410010(2022).

    [10] Yang Y, Liu J X, Huang S Y et al. Infrared and visible image fusion via texture conditional generative adversarial network[J]. IEEE Transactions on Circuits and Systems for Video Technology, 31, 4771-4783(2021).

    [11] Liu J Y, Fan X, Jiang J et al. Learning a deep multi-scale feature ensemble and an edge-attention guidance for image fusion[J]. IEEE Transactions on Circuits and Systems for Video Technology, 32, 105-119(2022).

    [12] Jian L H, Yang X M, Liu Z et al. SEDRFuse: a symmetric encoder-decoder with residual block network for infrared and visible image fusion[J]. IEEE Transactions on Instrumentation and Measurement, 70, 5002215(2021).

    [13] Zang Y S, Zhou D M, Wang C C et al. UFA-FUSE: a novel deep supervised and hybrid model for multifocus image fusion[J]. IEEE Transactions on Instrumentation and Measurement, 70, 5008717(2021).

    [15] Yin H T, Zhou W. Multi-scale dilated convolutional neural network based multi-focus image fusion algorithm[J]. Laser & Optoelectronics Progress, 60, 0210003(2023).

    [16] Ma K D, Duanmu Z F, Zhu H W et al. Deep guided learning for fast multi-exposure image fusion[J]. IEEE Transactions on Image Processing, 29, 2808-2819(2019).

    [17] Wang J, Yu L, Tian S W et al. AMFNet: an attention-guided generative adversarial network for multi-model image fusion[J]. Biomedical Signal Processing and Control, 78, 103990(2022).

    [18] Zhang H, Yuan J T, Tian X et al. GAN-FM: infrared and visible image fusion using GAN with full-scale skip connection and dual Markovian discriminators[J]. IEEE Transactions on Computational Imaging, 7, 1134-1147(2021).

    [19] Su W J, Huang Y D, Li Q F et al. Infrared and visible image fusion based on adversarial feature extraction and stable image reconstruction[J]. IEEE Transactions on Instrumentation and Measurement, 71, 2510214(2022).

    [20] Ma J Y, Tang L F, Xu M L et al. STDFusionNet: an infrared and visible image fusion network based on salient target detection[J]. IEEE Transactions on Instrumentation and Measurement, 70, 5009513(2021).

    [21] Zhou H B, Hou J L, Zhang Y D et al. Unified gradient- and intensity-discriminator generative adversarial network for image fusion[J]. Information Fusion, 88, 184-201(2022).

    [22] Fu Y, Wu X J, Durrani T. Image fusion based on generative adversarial network consistent with perception[J]. Information Fusion, 72, 110-125(2021).

    [23] Zhu D P, Zhan W D, Jiang Y C et al. IPLF: a novel image pair learning fusion network for infrared and visible image[J]. IEEE Sensors Journal, 22, 8808-8817(2022).

    [24] Fu Y, Wu X J, Kittler J. Effective method for fusing infrared and visible images[J]. Journal of Electronic Imaging, 30, 033013(2021).

    [25] Zhang H Z, Shen Y F, Ou Y Y et al. A GAN-based visible and infrared image fusion algorithm[J]. Proceedings of SPIE, 12061, 120610Z(2021).

    [26] Zhong Z, Yang J F. A novel pig-body multi-feature representation method based on multi-source image fusion[J]. Measurement, 204, 111968(2022).

    [27] Xu H, Ma J Y, Le Z L et al. FusionDN: a unified densely connected network for image fusion[C], 12484-12491(2020).

    [28] Xu H, Ma J Y, Jiang J J et al. U2Fusion: a unified unsupervised image fusion network[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44, 502-518(2022).

    [29] Xu H, Wang X Y, Ma J Y. DRF: disentangled representation for visible and infrared image fusion[J]. IEEE Transactions on Instrumentation and Measurement, 70, 5006713(2021).

    [30] Jiang X H, Nie R C, Wang C C et al. DenseNet with orthogonal kernel for infrared and visible image fusion[C], 146-150(2022).

    [32] Jung H, Kim Y, Jang H et al. Unsupervised deep image fusion with structure tensor representations[J]. IEEE Transactions on Image Processing, 29, 3845-3858(2020).

    [33] Wang H F, Wang J Z, Xu H N et al. DRSNFuse: deep residual shrinkage network for infrared and visible image fusion[J]. Sensors, 22, 5149(2022).

    [34] Yang Y, Kong X K, Huang S Y et al. Infrared and visible image fusion based on multiscale network with dual-channel information cross fusion block[C](2021).

    [35] Guo C X, Fan D D, Jiang Z X et al. MDFN: Mask deep fusion network for visible and infrared image fusion without reference ground-truth[J]. Expert Systems with Applications, 211, 118631(2023).

    [36] Li H, Wu X J, Kittler J. RFN-Nest: an end-to-end residual fusion network for infrared and visible images[J]. Information Fusion, 73, 72-86(2021).

    [37] Dalal N, Triggs B. Histograms of oriented gradients for human detection[C], 886-893(2005).

    [38] Carcagnì P, Del Coco M, Leo M et al. Facial expression recognition and histograms of oriented gradients: a comprehensive study[J]. SpringerPlus, 4, 645(2015).

    [39] Sugiarto B, Prakasa E, Wardoyo R et al. Wood identification based on histogram of oriented gradient (HOG) feature and support vector machine (SVM) classifier[C], 337-341(2018).

    [40] Wei C, Fan H Q, Xie S N et al. Masked feature prediction for self-supervised visual pre-training[C], 14648-14658(2022).

    [41] Zong J J, Qiu T S. Medical image fusion based on sparse representation of classified image patches[J]. Biomedical Signal Processing and Control, 34, 195-205(2017).

    [42] Maggu J, Saini J K, Verma P. FILM.LrTL: FusIng MuLtiFocus IMages using low-rank transform learning[C], 60-65(2022).

    [43] Wang Z, Simoncelli E P, Bovik A C. Multiscale structural similarity for image quality assessment[C], 1398-1402(2004).

    [44] Duda R O, Hart P E, Stork D G. Pattern classification[M](2000).

    [45] Guo C K. Multi-modal image registration with unsupervised deep learning[D](2019).

    [46] Xu R, Chen Y W, Tang S Y et al. Parzen-window based normalized mutual information for medical image registration[J]. IEICE-Transactions on Information and Systems, E91-D, 132-144(2008).

    [47] Lin T Y, Maire M, Belongie S et al. Microsoft COCO: common objects in context[M]. Fleet D, Pajdla T, Schiele B, et al. Computer vision-ECCV 2014. Lecture notes in computer science, 8693, 740-755(2014).

    [48] Toet A, Hogervorst M A. Progress in color night vision[J]. Optical Engineering, 51, 010901(2012).

    [49] Zhao H, Gallo O, Frosio I et al. Loss functions for image restoration with neural networks[J]. IEEE Transactions on Computational Imaging, 3, 47-57(2017).

    [50] Roberts J W, van Aardt J, Ahmed F. Assessment of image fusion procedures using entropy, image quality, and multispectral classification[J]. Journal of Applied Remote Sensing, 2, 023522(2008).

    [51] Rao Y J. In-fibre Bragg grating sensors[J]. Measurement Science and Technology, 8, 355(1997).

    [52] Qu G H, Zhang D L, Yan P F. Information measure for performance of image fusion[J]. Electronics Letters, 38, 313-315(2002).

    [53] Aslantas V, Bendes E. A new image quality metric for image fusion: the sum of the correlations of differences[J]. AEU-International Journal of Electronics and Communications, 69, 1890-1896(2015).

    [54] Haghighat M, Razian M A. Fast-FMI: Non-reference image fusion metric[C](2015).

    [55] Xydeas C S, Petrović V. Objective image fusion performance measure[J]. Electronics Letters, 36, 308-309(2000).

    [56] Han Y, Cai Y Z, Cao Y et al. A new image fusion performance metric based on visual information fidelity[J]. Information Fusion, 14, 127-135(2013).

    Paper Information

    Category: Imaging Systems

    Received: Mar. 16, 2023

    Accepted: Apr. 4, 2023

    Published Online: Nov. 27, 2023

    Author Email: Ziji Liu (zjliu@uestc.edu.cn)

    DOI: 10.3788/LOP230882
