Infrared Technology, Vol. 45, Issue 3, 266 (2023)

Multi-scale Transformer Fusion Method for Infrared and Visible Images

Yanlin CHEN, Zhishe WANG*, Wenyu SHAO, Fan YANG, and Jing SUN
    References (28)

    [1] Paramanandham N, Rajendiran K. Multi sensor image fusion for surveillance applications using hybrid image fusion algorithm[J]. Multimedia Tools and Applications, 2018, 77(10): 12405-12436.

    [2] ZHANG Xingchen, YE Ping, QIAO Dan, et al. Object fusion tracking based on visible and infrared images: a comprehensive review[J]. Information Fusion, 2020, 63: 166-187.

    [3] TU Zhengzheng, LI Zhun, LI Chenglong, et al. Multi-interactive dual-decoder for RGB-thermal salient object detection[J]. IEEE Transactions on Image Processing, 2021, 30: 5678-5691.

    [6] LI Hui, WU Xiaojun, Kittler J. MDLatLRR: a novel decomposition method for infrared and visible image fusion[J]. IEEE Transactions on Image Processing, 2020, 29: 4733-4746.

    [9] KONG Weiwei, LEI Yang, ZHAO Huaixun. Adaptive fusion method of visible light and infrared images based on non-subsampled shearlet transform and fast non-negative matrix factorization[J]. Infrared Physics & Technology, 2014, 67: 161-172.

    [12] LIU Yu, CHEN Xun, PENG Hu, et al. Multi-focus image fusion with a deep convolutional neural network[J]. Information Fusion, 2017, 36: 191-207.

    [13] ZHANG Hao, XU Han, TIAN Xin, et al. Image fusion meets deep learning: a survey and perspective[J]. Information Fusion, 2021, 76: 323-336.

    [14] ZHANG Yu, LIU Yu, SUN Peng, et al. IFCNN: a general image fusion framework based on convolutional neural network[J]. Information Fusion, 2020, 54: 99-118.

    [15] LI Hui, WU Xiaojun. DenseFuse: a fusion approach to infrared and visible images[J]. IEEE Transactions on Image Processing, 2019, 28(5): 2614-2623.

    [16] LI Hui, WU Xiaojun, Kittler J. RFN-Nest: an end-to-end residual fusion network for infrared and visible images[J]. Information Fusion, 2021, 73: 72-86.

    [17] JIAN Lihua, YANG Xiaomin, LIU Zheng, et al. SEDRFuse: a symmetric encoder–decoder with residual block network for infrared and visible image fusion[J]. IEEE Transactions on Instrumentation and Measurement, 2020, 70: 1-15.

    [18] ZHANG Hao, XU Han, XIAO Yang, et al. Rethinking the image fusion: a fast unified image fusion network based on proportional maintenance of gradient and intensity[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2020, 34(7): 12797-12804.

    [19] WANG Zhishe, WANG Junyao, WU Yuanyuan, et al. UNFusion: a unified multi-scale densely connected network for infrared and visible image fusion[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(6): 3360-3374.

    [20] WANG Zhishe, WU Yuanyuan, WANG Junyao, et al. Res2Fusion: infrared and visible image fusion based on dense Res2net and double nonlocal attention models[J]. IEEE Transactions on Instrumentation and Measurement, 2022, 71: 1-12.

    [21] MA Jiayi, YU Wei, LIANG Pengwei, et al. FusionGAN: a generative adversarial network for infrared and visible image fusion[J]. Information Fusion, 2019, 48: 11-26.

    [22] MA Jiayi, ZHANG Hao, SHAO Zhenfeng, et al. GANMcC: a generative adversarial network with multiclassification constraints for infrared and visible image fusion[J]. IEEE Transactions on Instrumentation and Measurement, 2021, 70: 1-14.

    [24] LI Jing, ZHU Jianming, LI Chang, et al. CGTF: convolution-guided transformer for infrared and visible image fusion[J]. IEEE Transactions on Instrumentation and Measurement, 2022, 71: 1-14.

    [25] RAO Dongyu, WU Xiaojun, XU Tianyang. TGFuse: an infrared and visible image fusion approach based on transformer and generative adversarial network[J/OL]. arXiv preprint arXiv:2201.10147, 2022.

    [26] WANG Zhishe, CHEN Yanlin, SHAO Wenyu, et al. SwinFuse: a residual swin transformer fusion network for infrared and visible images[J/OL]. arXiv preprint arXiv:2204.11436, 2022.

    [27] ZHAO Haibo, NIE Rencan. DNDT: infrared and visible image fusion via DenseNet and dual-transformer[C]//International Conference on Information Technology and Biomedical Engineering (ICITBE), 2021: 71-75.

    [28] VS V, Valanarasu J M J, Oza P, et al. Image fusion transformer[J/OL]. arXiv preprint arXiv:2107.09011, 2021.

    [29] LIU Ze, LIN Yutong, CAO Yue, et al. Swin transformer: hierarchical vision transformer using shifted windows[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021: 10012-10022.

    [30] TOET A. TNO Image Fusion Dataset[DB/OL]. [2014-04-26]. https://figshare.com/articles/TN_Image_Fusion_Dataset/1008029.

    [31] XU Han. RoadScene Database[DB/OL]. [2020-08-07]. https://github.com/hanna-xu/RoadScene.


    Paper Information

    Received: Aug. 23, 2022

    Published Online: Apr. 7, 2023

    Corresponding Author Email: Zhishe WANG (wangzs@tyust.edu.cn)
