Infrared and Laser Engineering, Volume 54, Issue 5, 20240592 (2025)

Degradation-aware transformer for blind hyperspectral and multispectral image fusion (back cover paper · invited)

Xuheng CAO1,2,3, Xiaopeng HAO1,3, Yusheng LIAN4, Xuquan WANG2, and Xinbin CHENG2
Author Affiliations
  • 1Remote Sensing Calibration Laboratory, National Institute of Metrology, Beijing 100029, China
  • 2Institute of Precision Optical Engineering, School of Physics Science and Engineering, Tongji University, Shanghai 200092, China
  • 3Technology Innovation Center of Infrared Remote Sensing Metrology Technology, State Administration for Market Regulation, Beijing 100029, China
  • 4School of Printing and Packaging Engineering, Beijing Institute of Graphic Communication, Beijing 102600, China

    Objective
    Hyperspectral and multispectral image fusion is a widely used technique for generating high-resolution hyperspectral images (HR-HSI) by combining the spatial details of multispectral images (MSI) with the spectral richness of hyperspectral images (HSI). Existing methods, however, often rely on predefined degradation priors and fail to fully capture the complex spatial-spectral interactions, which limits their performance in practical scenarios. This paper therefore proposes an adaptive degradation-aware feature fusion framework to address these limitations.

    Methods
    The proposed framework consists of three components: a feature fusion network and two degradation-aware networks for the spatial and spectral domains (SpaDNet and SpeDNet). First, a spatio-spectral cross-attention mechanism is introduced, which exploits the hierarchical correlations between spatial and spectral features to enhance their interaction in the reconstructed images. Second, leveraging the physical characteristics of the degradation process, SpaDNet and SpeDNet adaptively learn degradation priors from the input images, enabling efficient blind fusion without predefined degradation information. Finally, a subspace loss function is designed to decouple spatial-domain interference during spectral degradation modeling, improving the precision of degradation-aware learning.

    Results and Discussions
    The proposed framework is validated on benchmark datasets, including CAVE and Harvard, improving peak signal-to-noise ratio (PSNR) by 0.81 dB and 0.76 dB, respectively, over existing methods. In experiments on a hybrid-resolution imaging system, the reconstructed spectral band images demonstrated superior accuracy in structural texture and signal strength. These results underscore the proposed method's ability to produce HR-HSI with enhanced spatial-spectral consistency, even in real-world scenarios where degradation priors are unknown.

    Conclusions
    The proposed adaptive degradation-aware framework introduces a novel approach to blind HSI-MSI fusion that addresses the limitations of existing methods. The integration of spatio-spectral cross-attention and adaptive degradation modeling ensures robust and accurate fusion results. Experimental results on both synthetic and real-world datasets demonstrate the method's effectiveness, offering a significant improvement over state-of-the-art methods.
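To make the cross-attention idea concrete: in a spatio-spectral cross-attention step, tokens from one domain act as queries against tokens from the other, so each spectral feature is refined by a weighted mixture of spatial features. The sketch below is a minimal single-head illustration in NumPy under assumed token shapes (31 spectral-band tokens, 256 spatial-patch tokens, 64-dimensional embeddings); it is not the paper's actual network, which additionally learns query/key/value projections and stacks such blocks hierarchically.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(spec_feat, spat_feat):
    """Spectral tokens (queries) attend to spatial tokens (keys/values).

    spec_feat: (N_spec, d) spectral-band token embeddings
    spat_feat: (N_spat, d) spatial-patch token embeddings
    Returns:   (N_spec, d) spectral tokens enriched with spatial detail
    """
    d = spec_feat.shape[-1]
    scores = spec_feat @ spat_feat.T / np.sqrt(d)   # (N_spec, N_spat) similarities
    weights = softmax(scores, axis=-1)              # each query's weights sum to 1
    return weights @ spat_feat                      # weighted mix of spatial tokens

rng = np.random.default_rng(0)
spec = rng.normal(size=(31, 64))    # e.g. 31 spectral-band tokens
spat = rng.normal(size=(256, 64))   # e.g. 256 spatial-patch tokens
fused = cross_attention(spec, spat)
print(fused.shape)  # (31, 64)
```

In the full framework the same mechanism can be applied in the opposite direction (spatial queries attending to spectral keys), letting the two domains mutually refine each other.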


    Citation

    Xuheng CAO, Xiaopeng HAO, Yusheng LIAN, Xuquan WANG, Xinbin CHENG. Degradation-aware transformer for blind hyperspectral and multispectral image fusion (back cover paper · invited)[J]. Infrared and Laser Engineering, 2025, 54(5): 20240592

    Paper Information

    Category: Special issue—Hyperspectral technology and applications

    Received: Dec. 19, 2024

    Accepted: --

    Published Online: May 26, 2025


    DOI:10.3788/IRLA20240592
