Laser & Optoelectronics Progress, Vol. 60, Issue 16, 1610013 (2023)
Infrared and Visible Image Fusion with Convolutional Neural Network and Transformer
[1] He Z F, Chen G C, Chen J S et al. Multi-scale feature fusion lightweight real-time infrared pedestrian detection at night[J]. Chinese Journal of Lasers, 49, 1709002(2022).
[2] Li C, Yang D D, Song P et al. Global-aware siamese network for thermal infrared object tracking[J]. Acta Optica Sinica, 41, 0615002(2021).
[3] Feng Y F, Yin H, Lu H Q et al. Infrared and visible light image fusion method based on improved fully convolutional neural network[J]. Computer Engineering, 46, 243-249, 257(2020).
[4] Chen C Q, Meng X C, Shao F et al. Infrared and visible image fusion method based on multiscale low-rank decomposition[J]. Acta Optica Sinica, 40, 1110001(2020).
[5] Tang C Y, Pu S L, Ye P Z et al. Fusion of low-illuminance visible and near-infrared images based on convolutional neural networks[J]. Acta Optica Sinica, 40, 1610001(2020).
[6] Liu Y, Chen X, Wang Z F et al. Deep learning for pixel-level image fusion: recent advances and future prospects[J]. Information Fusion, 42, 158-173(2018).
[7] Burt P, Adelson E. The Laplacian pyramid as a compact image code[J]. IEEE Transactions on Communications, 31, 532-540(1983).
[8] Toet A. Hierarchical image fusion[J]. Machine Vision and Applications, 3, 1-11(1990).
[9] Nencini F, Garzelli A, Baronti S et al. Remote sensing image fusion using the curvelet transform[J]. Information Fusion, 8, 143-156(2007).
[10] Lewis J J, O’Callaghan R J, Nikolov S G et al. Pixel- and region-based image fusion with complex wavelets[J]. Information Fusion, 8, 119-130(2007).
[11] Zhao C H, Guo Y T, Wang Y L. A fast fusion scheme for infrared and visible light images in NSCT domain[J]. Infrared Physics & Technology, 72, 266-275(2015).
[12] Naidu V P S, Raol J R. Pixel-level image fusion using wavelets and principal component analysis[J]. Defence Science Journal, 58, 338-352(2008).
[13] Liu Y, Chen X, Ward R K et al. Image fusion with convolutional sparse representation[J]. IEEE Signal Processing Letters, 23, 1882-1886(2016).
[14] Ma J L, Zhou Z Q, Wang B et al. Infrared and visible image fusion based on visual saliency map and weighted least square optimization[J]. Infrared Physics & Technology, 82, 8-17(2017).
[15] Fu Z Z, Wang X, Li X F et al. Infrared and visible image fusion based on visual saliency and NSCT[J]. Journal of University of Electronic Science and Technology of China, 46, 357-362(2017).
[16] Huang W, Jing Z L. Evaluation of focus measures in multi-focus image fusion[J]. Pattern Recognition Letters, 28, 493-500(2007).
[17] Jiang Z T, He Y T. Infrared and visible image fusion method based on convolutional auto-encoder and residual block[J]. Acta Optica Sinica, 39, 1015001(2019).
[18] Li H, Wu X J. DenseFuse: a fusion approach to infrared and visible images[J]. IEEE Transactions on Image Processing, 28, 2614-2623(2019).
[19] Li H, Wu X J, Durrani T. NestFuse: an infrared and visible image fusion architecture based on nest connection and spatial/channel attention models[J]. IEEE Transactions on Instrumentation and Measurement, 69, 9645-9656(2020).
[20] Ma J Y, Yu W, Liang P W et al. FusionGAN: a generative adversarial network for infrared and visible image fusion[J]. Information Fusion, 48, 11-26(2019).
[21] Xu H, Liang P W, Yu W et al. Learning a generative model for fusing infrared and visible images via conditional generative adversarial network with dual discriminators[C]//Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI), 3954-3960(2019).
[22] Vaswani A, Shazeer N, Parmar N et al. Attention is all you need[C]//Advances in Neural Information Processing Systems 30 (NIPS 2017), 6000-6010(2017).
[24] Xu H, Ma J Y, Jiang J J et al. U2Fusion: a unified unsupervised image fusion network[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44, 502-518(2022).
[25] Zhang Y, Liu Y, Sun P et al. IFCNN: a general image fusion framework based on convolutional neural network[J]. Information Fusion, 54, 99-118(2020).
[26] Hu J, Shen L, Sun G. Squeeze-and-excitation networks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 7132-7141(2018).
[27] Woo S, Park J, Lee J Y et al. CBAM: convolutional block attention module[M]//Ferrari V, Hebert M, Sminchisescu C, et al. Computer Vision - ECCV 2018. Lecture Notes in Computer Science, vol 11211, 3-19(2018).
[29] Liu Z, Lin Y T, Cao Y et al. Swin Transformer: hierarchical vision transformer using shifted windows[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 9992-10002(2021).
[30] Wang Z, Bovik A C, Sheikh H R et al. Image quality assessment: from error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 13, 600-612(2004).
[31] Lin T Y, Maire M, Belongie S et al. Microsoft COCO: common objects in context[M]//Fleet D, Pajdla T, Schiele B, et al. Computer Vision - ECCV 2014. Lecture Notes in Computer Science, vol 8693, 740-755(2014).
[33] Zuo Y J, Liu J H, Bai G B et al. Airborne infrared and visible image fusion combined with region segmentation[J]. Sensors, 17, 1127(2017).
[34] Toet A. Image fusion by a ratio of low-pass pyramid[J]. Pattern Recognition Letters, 9, 245-253(1989).
[35] Qu G H, Zhang D L, Yan P F et al. Medical image fusion by wavelet transform modulus maxima[J]. Optics Express, 9, 184-190(2001).
[36] Li H, Wu X J, Durrani T S. Infrared and visible image fusion with ResNet and zero-phase component analysis[J]. Infrared Physics & Technology, 102, 103039(2019).
[37] Fu Y, Wu X J. A dual-branch network for infrared and visible image fusion[C]//2020 25th International Conference on Pattern Recognition (ICPR), 10675-10680(2021).
[38] Ma J Y, Zhang H, Shao Z F et al. GANMcC: a generative adversarial network with multiclassification constraints for infrared and visible image fusion[J]. IEEE Transactions on Instrumentation and Measurement, 70, 5005014(2021).
[39] Liu Z, Blasch E, Xue Z Y et al. Objective assessment of multiresolution image fusion algorithms for context enhancement in night vision: a comparative study[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34, 94-109(2012).
[40] Hu L M, Gao J, He K F. Research on quality measures for image fusion[J]. Acta Electronica Sinica, 32, 218-221(2004).
[41] Aslantas V, Bendes E. A new image quality metric for image fusion: the sum of the correlations of differences[J]. AEU-International Journal of Electronics and Communications, 69, 1890-1896(2015).
[42] Xydeas C S, Petrović V. Objective image fusion performance measure[J]. Electronics Letters, 36, 308-309(2000).
Citation: Yang Yang, Zhennan Ren, Beichen Li. Infrared and visible image fusion with convolutional neural network and Transformer[J]. Laser & Optoelectronics Progress, 2023, 60(16): 1610013.
Category: Image Processing
Received: Aug. 12, 2022
Accepted: Oct. 27, 2022
Published Online: Aug. 18, 2023
Corresponding Author Email: Ren Zhennan (Ren2151311@163.com)