Optics and Precision Engineering, Vol. 33, Issue 7, 1152 (2025)
Infrared and visible image fusion based on multi-scale spatial attention complementary
Current infrared and visible image fusion methods tend to introduce excessive redundant infrared information, impairing their ability to balance details in complex scenes and resulting in suboptimal fusion outcomes. To address these limitations, a novel fusion approach based on multi-scale spatial attention complementarity is proposed. The method employs a dual-branch convolutional network to extract features from the infrared and visible images separately, followed by difference-based complementary processing. Multi-scale spatial attention mechanisms are then applied to the feature maps, culminating in regression-based superposition that achieves a balanced fusion of the complementary features. Experimental evaluations demonstrate that, compared to mainstream methods such as DenseFuse and PIAFusion, the proposed approach achieves improvements of 4.1% and 4.3% in mutual information (MI), and 5.0% and 2.3% in visual information fidelity (VIF), respectively. These results indicate enhanced retention of target features and effective suppression of redundant information in complex scenes. The method exhibits strong feature-balancing capability and holds significant potential for target detection and recognition under challenging environmental conditions.
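The abstract's pipeline of attention-weighted complementary blending can be illustrated with a minimal sketch. This is not the authors' implementation (which uses a dual-branch CNN for feature extraction); it only demonstrates the underlying idea of deriving per-pixel attention weights at several spatial scales and fusing the two modalities as a convex combination. All function names, the single-channel list representation, and the choice of sigmoid-of-local-mean attention are illustrative assumptions.

```python
import math

def spatial_attention_weight(feat, pool=1):
    """Illustrative attention at one scale: mean-pool each pixel's
    pool x pool neighborhood (edge-clamped), then squash with a
    sigmoid to get a per-pixel weight in (0, 1)."""
    h, w = len(feat), len(feat[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [feat[min(i + di, h - 1)][min(j + dj, w - 1)]
                    for di in range(pool) for dj in range(pool)]
            out[i][j] = 1.0 / (1.0 + math.exp(-sum(vals) / len(vals)))
    return out

def fuse(ir, vis, scales=(1, 2)):
    """Complementary fusion sketch: average the infrared branch's
    attention maps over several scales, then blend so the infrared
    weight w and visible weight (1 - w) sum to 1 at every pixel."""
    h, w = len(ir), len(ir[0])
    maps = [spatial_attention_weight(ir, s) for s in scales]
    fused = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            wt = sum(m[i][j] for m in maps) / len(maps)
            fused[i][j] = wt * ir[i][j] + (1.0 - wt) * vis[i][j]
    return fused
```

Because the weight lies strictly in (0, 1), every fused pixel is a convex combination of the two inputs, which is one simple way to keep strong infrared targets without letting redundant infrared content overwrite visible-band detail.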
Yongxing ZHANG, Bowen LIAN, Naiting GU, Fangzhao LI, Yang LI. Infrared and visible image fusion based on multi-scale spatial attention complementary[J]. Optics and Precision Engineering, 2025, 33(7): 1152
Received: Dec. 31, 2024
Accepted: --
Published Online: Jun. 23, 2025
The Author Email: Naiting GU (gnt7328@163.com)