Acta Photonica Sinica, Volume. 52, Issue 11, 1110003(2023)

Infrared and Visible Image Fusion Based on Dual Channel Residual Dense Network

Xin FENG, Jieming YANG*, Hongde ZHANG, and Guohang QIU
Author Affiliations
  • School of Mechanical Engineering, Key Laboratory of Manufacturing Equipment Mechanism Design and Control of Chongqing, Chongqing Technology and Business University, Chongqing 400067, China
    References (32)

    [1] MA Jiayi, MA Yong, LI Chang. Infrared and visible image fusion methods and applications: a survey[J]. Information Fusion, 45, 153-178(2019).

    [2] ZHANG Hao, XU Han, TIAN Xin et al. Image fusion meets deep learning: a survey and perspective[J]. Information Fusion, 76, 323-336(2021).

    [3] DAS S, ZHANG Yunlong. Color night vision for navigation and surveillance[J]. Transportation Research Record, 1708, 40-46(2000).

    [4] SUN Jiping, FAN Weiqiang. Mine dual-band image fusion in MS-ADoG domain combined with ReNLU and VGG-16[J]. Acta Photonica Sinica, 51, 0310002(2022).

    [5] CAO Yanpeng, GUAN Dayan, HUANG Weilin et al. Pedestrian detection with unsupervised multispectral feature learning using deep neural networks[J]. Information Fusion, 46, 206-217(2019).

    [6] LIU Yu, LIU Shuping, WANG Zengfu. A general framework for image fusion based on multi-scale transform and sparse representation[J]. Information Fusion, 24, 147-164(2015).

    [7] MA Jiayi, CHEN Chen, LI Chang et al. Infrared and visible image fusion via gradient transfer and total variation minimization[J]. Information Fusion, 31, 100-109(2016).

    [8] LIU Yu, CHEN Xun, HU Peng et al. Multi-focus image fusion with a deep convolutional neural network[J]. Information Fusion, 36, 191-207(2017).

    [9] MA Jiayi, TANG Linfeng, XU Meilong et al. STDFusionNet: an infrared and visible image fusion network based on salient target detection[J]. IEEE Transactions on Instrumentation and Measurement, 70, 1-13(2021).

    [10] MA Jiayi, YU Wei, LIANG Pengwei et al. FusionGAN: a generative adversarial network for infrared and visible image fusion[J]. Information Fusion, 48, 11-26(2019).

    [11] MA Jiayi, XU Han, JIANG Junjun et al. DDcGAN: a dual-discriminator conditional generative adversarial network for multi-resolution image fusion[J]. IEEE Transactions on Image Processing, 29, 4980-4995(2020).

    [12] MA Jiayi, ZHANG Hao, SHAO Zhenfeng et al. GANMcC: a generative adversarial network with multiclassification constraints for infrared and visible image fusion[J]. IEEE Transactions on Instrumentation and Measurement, 70, 1-14(2021).

    [13] LI Hui, WU Xiaojun. DenseFuse: a fusion approach to infrared and visible images[J]. IEEE Transactions on Image Processing, 28, 2614-2623(2019).

    [14] XU Han, WANG Xinya, MA Jiayi. DRF: Disentangled representation for visible and infrared image fusion[J]. IEEE Transactions on Instrumentation and Measurement, 70, 1-13(2021).

    [15] LI Hui, WU Xiaojun, KITTLER J. RFN-Nest: an end-to-end residual fusion network for infrared and visible images[J]. Information Fusion, 73, 72-86(2021).

    [17] HE Kaiming, ZHANG Xiangyu, REN Shaoqing et al. Deep residual learning for image recognition[C]. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770-778(2016).

    [18] HUANG Gao, LIU Zhuang, VAN DER MAATEN Laurens et al. Densely connected convolutional networks[C]. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2261-2269(2017).

    [22] WANG Zhou, BOVIK A C, SHEIKH H R et al. Image quality assessment: from error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 13, 600-612(2004).

    [23] LI Hui, WU Xiaojun, DURRANI T. NestFuse: an infrared and visible image fusion architecture based on nest connection and spatial/channel attention models[J]. IEEE Transactions on Instrumentation and Measurement, 69, 9645-9656(2020).

    [24] LIN T Y, MAIRE M, BELONGIE S et al. Microsoft COCO: common objects in context[C]. European Conference on Computer Vision (ECCV), 740-755(2014).

    [26] MA Kede, ZENG Kai, WANG Zhou. Perceptual quality assessment for multi-exposure image fusion[J]. IEEE Transactions on Image Processing, 24, 3345-3356(2015).

    [27] ASLANTAS V, BENDES E. A new image quality metric for image fusion: the sum of the correlations of differences[J]. AEU - International Journal of Electronics and Communications, 69, 1890-1896(2015).

    [28] SHEIKH H R, SABIR M F, BOVIK A C. A statistical evaluation of recent full reference image quality assessment algorithms[J]. IEEE Transactions on Image Processing, 15, 3440-3451(2006).

    [29] ZHANG Yu, LIU Yu, SUN Peng et al. IFCNN: a general image fusion framework based on convolutional neural network[J]. Information Fusion, 54, 99-118(2020).

    [30] ZHANG Hao, XU Han, YANG Xiao et al. Rethinking the image fusion: a fast unified image fusion network based on proportional maintenance of gradient and intensity[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 34, 12797-12804(2020).

    [31] XU Han, MA Jiayi, JIANG Junjun et al. U2Fusion: a unified unsupervised image fusion network[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44, 502-518(2022).

    [32] RAO Yunjiang. In-fibre Bragg grating sensors[J]. Measurement Science and Technology, 8, 355(1997).

    [33] QU Guihong, ZHANG Dali, YAN Pingfan. Information measure for performance of image fusion[J]. Electronics Letters, 38, 313-315(2002).

    Paper Information

    Received: May 19, 2023

    Accepted: Jun. 13, 2023

    Published Online: Dec. 22, 2023

    The Author Email: Jieming YANG (2871600119@qq.com)

    DOI:10.3788/gzxb20235211.1110003
