Acta Photonica Sinica, Volume. 53, Issue 8, 0810004(2024)
A Dual Branch Edge Convolution Fusion Network for Infrared and Visible Images
Image fusion is the process of extracting complementary information from a set of source images and integrating it into a single image. Fusion aggregates useful information, removes redundancy, and improves both image quality and scene perception. Infrared and visible image fusion is a common branch of this field and is widely used in image processing. Infrared imaging can capture hidden heat-source targets and is robust to interference, while visible imaging, based on reflected light, records rich scene detail. Fusing the two combines the fine texture of the visible image with the salient targets of the infrared image, yielding a clearer and more accurate description of the scene that benefits target recognition and tracking. However, most current deep-learning-based fusion methods focus on feature extraction and loss-function design: they do not separate common information from modality-specific information, and they apply the same feature extractor to both modalities without accounting for the differences between them. To address this, this paper proposes an infrared and visible image fusion method based on a dual-branch edge convolution fusion network.
First, building on a dual-branch autoencoder, an improved dual-branch edge convolution structure is proposed that decomposes the extracted features into common information and modality-specific information, with an edge convolution block introduced in each branch to better extract deep features. Second, a convolutional block attention module (CBAM) is introduced in the fusion layer to enhance the features of each modality separately for a better fusion result. Finally, matched to the encoder-decoder structure of this network, a loss function combining reconstruction loss and fusion loss is proposed, which better preserves the information of the source images. To verify the effectiveness of the proposed method, 10 pairs of images were randomly selected from the TNO dataset and from the test set of the MSRS dataset, and evaluated on six metrics, including MSE, SF, CC, and PSNR.
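The convolutional block attention module mentioned above reweights features along the channel dimension before fusion. The abstract does not give the module's exact configuration, so the following is a minimal NumPy sketch of CBAM-style channel attention only (the spatial-attention branch, which uses a convolution over pooled maps, is omitted); the weight matrices `w1`, `w2` and the reduction ratio are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """CBAM-style channel attention (sketch).

    feat: feature map of shape (C, H, W)
    w1:   (C//r, C) weight of the shared MLP's first layer (r = reduction ratio)
    w2:   (C, C//r) weight of the shared MLP's second layer
    """
    # Squeeze spatial dimensions with both average and max pooling
    avg = feat.mean(axis=(1, 2))          # (C,)
    mx = feat.max(axis=(1, 2))            # (C,)
    # Shared two-layer MLP with ReLU, applied to each pooled vector
    att = sigmoid(w2 @ np.maximum(w1 @ avg, 0.0)
                  + w2 @ np.maximum(w1 @ mx, 0.0))   # (C,), values in (0, 1)
    # Rescale each channel of the feature map
    return feat * att[:, None, None]
```

In a dual-branch design, one such module per branch lets the network emphasize different channels for the infrared and visible feature maps before they are merged.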
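The combined objective described above can be sketched as follows. The abstract does not specify the exact terms or weights of the paper's loss, so this is only an illustrative NumPy version under common assumptions for this task: reconstruction loss as per-source MSE, and fusion loss as an intensity term plus a gradient (texture) term, with hypothetical weights `alpha` and `lam`.

```python
import numpy as np

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def gradient(img):
    # Simple finite-difference gradient magnitude as a texture proxy
    gx = np.abs(np.diff(img, axis=1, append=img[:, -1:]))
    gy = np.abs(np.diff(img, axis=0, append=img[-1:, :]))
    return gx + gy

def fusion_loss(fused, ir, vis, alpha=0.5):
    # Intensity term: pull the fused image toward the brighter (salient) pixels
    intensity = mse(fused, np.maximum(ir, vis))
    # Gradient term: keep the stronger local texture of either source
    grad = mse(gradient(fused), np.maximum(gradient(ir), gradient(vis)))
    return intensity + alpha * grad

def total_loss(fused, recon_ir, recon_vis, ir, vis, lam=1.0):
    # Reconstruction loss keeps the encoder-decoder faithful to each source;
    # fusion loss drives the fused output toward complementary content
    recon = mse(recon_ir, ir) + mse(recon_vis, vis)
    return recon + lam * fusion_loss(fused, ir, vis)
```

When the decoder reconstructs each source perfectly, the reconstruction term vanishes and training is driven entirely by the fusion term.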
Hongde ZHANG, Xin FENG, Jieming YANG, Guohang QIU. A Dual Branch Edge Convolution Fusion Network for Infrared and Visible Images[J]. Acta Photonica Sinica, 2024, 53(8): 0810004
Received: Jan. 2, 2024
Accepted: Feb. 28, 2024
Published Online: Oct. 15, 2024
The Author Email: FENG Xin (149495263@qq.com)