Infrared Technology, Vol. 47, Issue 7, 895 (2025)

Visible and Infrared Image Fusion for Road Crack Detection

Sihao ZHAO1,2, Feng WANG3, Juanjuan YANG3, Yang PANG3, and Jianwu DANG1,2,*
Author Affiliations
  • 1School of Electronic and Information Engineering, Lanzhou Jiaotong University, Lanzhou 730070, China
  • 2National Virtual Simulation Experimental Teaching Center for Railway Transportation Information and Control, Lanzhou 730070, China
  • 3Gansu Luqiao Feiyu Transportation Facilities Co., Ltd., Lanzhou 730070, China
    References (15)

    [1] GUO Z, WANG L, YANG W, et al. LDFNet: lightweight dynamic fusion network for face forgery detection by integrating local artifacts and global texture information[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2024, 34: 1255-1265.

    [2] Zhuravlev A A, Aksyonov K A. Comparison of contour detection methods in images on the example of photos with road surface damage[C]//2023 IEEE Ural-Siberian Conference on Biomedical Engineering, Radioelectronics and Information Technology (USBEREIT), 2023: 183-186.

    [3] LI X S, ZHOU F Q, TAN H S, et al. Multi-focus image fusion based on nonsubsampled contourlet transform and residual removal[J]. Signal Processing, 2021, 184: 108062.

    [4] Prabhakar K R, Srikar V S, Babu R V. DeepFuse: a deep unsupervised approach for exposure fusion with extreme exposure image pairs[C]//2017 IEEE International Conference on Computer Vision, 2017: 4724-4732.

    [5] MA Jiayi, YU Wei, LIANG Pengwei, et al. FusionGAN: a generative adversarial network for infrared and visible image fusion[J]. Information Fusion, 2018, 48: 11-26.

    [6] MA Jiayi, ZHANG Hao, SHAO Zhenfeng, et al. GANMcC: a generative adversarial network with multiclassification constraints for infrared and visible image fusion[J]. IEEE Transactions on Instrumentation and Measurement, 2021, 70: 1-14.

    [7] SHI Weibo, NIU Dongyu, LI Zirui, et al. Effective contact texture region aware pavement skid resistance prediction via convolutional neural network[J]. Computer-Aided Civil and Infrastructure Engineering, 2023, 39: 2054-2070.

    [8] LI Hui, WU Xiaojun, Durrani T. NestFuse: an infrared and visible image fusion architecture based on nest connection and spatial/channel attention models[J]. IEEE Transactions on Instrumentation and Measurement, 2020, 69(12): 9645-9656.

    [10] XU B, LI S, YANG S, et al. MSPIF: multi-stage progressive visible and infrared image fusion with structures preservation[J]. Infrared Physics & Technology, 2023, 133: 104848.

    [14] YANG Lingxiao, ZHANG Ruyuan, LI Lida, et al. SimAM: a simple, parameter-free attention module for convolutional neural networks[C]//Proceedings of the International Conference on Machine Learning, 2021: 11863-11874.

    [15] Jumiawi W A H, El Zaart A. Otsu thresholding model using heterogeneous mean filters for precise images segmentation[C]//2022 International Conference of Advanced Technology in Electronic and Electrical Engineering, IEEE, 2022: 1-6.

    [16] ZHANG Hao, MA Jiayi. SDNet: a versatile squeeze-and-decomposition network for real-time image fusion[J]. International Journal of Computer Vision, 2021, 129: 2761-2785.

    [17] LI Hui, WU Xiaojun. DenseFuse: a fusion approach to infrared and visible images[J]. IEEE Transactions on Image Processing, 2018, 28(5): 2614-2623.

    [18] TANG Linfeng, YUAN Jiteng, ZHANG Hao, et al. PIAFusion: a progressive infrared and visible image fusion network based on illumination aware[J]. Information Fusion, 2022, 83: 79-92.

    [19] XU Han, MA Jiayi, JIANG Junjun, et al. U2Fusion: a unified unsupervised image fusion network[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 44(1): 502-518.

    Citation: ZHAO Sihao, WANG Feng, YANG Juanjuan, PANG Yang, DANG Jianwu. Visible and Infrared Image Fusion for Road Crack Detection[J]. Infrared Technology, 2025, 47(7): 895.
    Paper Information

    Received: Feb. 29, 2024

    Accepted: Aug. 12, 2025

    Published Online: Aug. 12, 2025

    Corresponding author: DANG Jianwu (dangjw@mail.lzjtu.cn)
