Spectroscopy and Spectral Analysis, Vol. 45, Issue 7, p. 2034 (2025)

Infrared and Visible Image Fusion Based on Improved Latent Low-Rank and Unsharp Masks

FENG Zhun-ruo1, LI Yun-hong1,*, CHEN Wei-zhong1, SU Xue-ping1, CHEN Jin-ni1, LI Jia-peng1, LIU Huan1, and LI Shi-bo2
Author Affiliations
  • 1School of Electronic Information, Xi'an Polytechnic University, Xi'an 710048, China
  • 2School of Science, Xi'an Polytechnic University, Xi'an 710048, China

To address the challenges of incomplete salient-information extraction and detail degradation in infrared and visible image fusion under low-light conditions, we propose an enhanced fusion algorithm that integrates Latent Low-Rank Representation (LatLRR) with an Anisotropic Diffusion-based Unsharp Mask (ADUSM). First, the infrared and visible images are segmented into blocks and vectorized before being fed into the LatLRR model; an inverse reconstruction operation then yields low-rank components from the infrared images and basic salient components from the visible images. Next, the basic salient components are processed with the ADUSM and pixel-wise differencing, further decomposing them into deep salient detail components and multi-level detail features. The low-rank components are fused under a visual-saliency-map rule, which improves the retention and visibility of salient targets in the fused image. The deep salient detail components are fused by local-entropy maximization, with a maximum-activity coefficient that preserves the deep salient details and thereby improves the overall quality and visual richness of the fused image. The multi-level detail features are fused with a weighted-average strategy based on maximum spatial frequency, which adapts to the detail levels of the input images and enhances overall clarity and contrast. Finally, we conduct a comparative analysis of the proposed method against the Bayesian, Wavelet, LatLRR, MSVD, and MDLatLRR algorithms on the TNO and M3FD datasets. Experimental results show that our algorithm significantly outperforms traditional low-rank algorithms, with improvements of 31%, 2.1%, 4.4%, and 34% in average gradient, information entropy, standard deviation, and spatial frequency, respectively. Subjective and objective evaluations indicate that the fused images produced by our method exhibit rich texture details and clear salient targets, and offer substantial advantages over the competing methods. This study effectively addresses the problem of incomplete salient-information extraction in low-light environments and shows strong generalization; the combination of improved latent low-rank decomposition and ADUSM filtering proves both effective and feasible for infrared and visible image fusion, contributing to the advancement and application of this technology.
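For reference, the four objective metrics reported above (average gradient, information entropy, standard deviation, and spatial frequency) have standard definitions. The following minimal Python/NumPy sketch is not the authors' implementation; it assumes single-channel, 8-bit grey-scale inputs and simply illustrates how these metrics are commonly computed.

```python
# Minimal sketch of the four objective fusion metrics (standard definitions,
# not the paper's code). Assumes a 2-D, 8-bit grey-scale image array.
import numpy as np

def average_gradient(img: np.ndarray) -> float:
    """AG: mean magnitude of the local intensity gradient."""
    img = img.astype(np.float64)
    gx = np.diff(img, axis=1)[:-1, :]   # horizontal differences
    gy = np.diff(img, axis=0)[:, :-1]   # vertical differences
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def information_entropy(img: np.ndarray, bins: int = 256) -> float:
    """EN: Shannon entropy of the grey-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]                        # drop empty bins before the log
    return float(-np.sum(p * np.log2(p)))

def standard_deviation(img: np.ndarray) -> float:
    """SD: global contrast measure."""
    return float(np.std(img.astype(np.float64)))

def spatial_frequency(img: np.ndarray) -> float:
    """SF: combined row and column frequency."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))
```

The percentage improvements quoted in the abstract are relative gains of the proposed method over the traditional low-rank baselines on these four metrics.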

Citation
    FENG Zhun-ruo, LI Yun-hong, CHEN Wei-zhong, SU Xue-ping, CHEN Jin-ni, LI Jia-peng, LIU Huan, LI Shi-bo. Infrared and Visible Image Fusion Based on Improved Latent Low-Rank and Unsharp Masks[J]. Spectroscopy and Spectral Analysis, 2025, 45(7): 2034

    Paper Information

    Received: Oct. 30, 2024

    Accepted: Jul. 24, 2025

    Published Online: Jul. 24, 2025

Corresponding author email: LI Yun-hong (hitliyunhong@163.com)

DOI: 10.3964/j.issn.1000-0593(2025)07-2034-11