Opto-Electronic Engineering, Volume. 52, Issue 4, 240297(2025)

Remote-sensing image reconstruction based on adaptive dual-domain attention network

Fei Wu1, Jiacheng Chen1, Jun Yang1,*, Wanliang Wang2, and Guoqing Li3
Author Affiliations
  • 1College of Information Science and Engineering, Jiaxing University, Jiaxing, Zhejiang 314000, China
  • 2College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, Zhejiang 310000, China
  • 3College of Information Science and Engineering, Ningbo University, Ningbo, Zhejiang 315000, China
    Figures & Tables (19)
    Overall architecture and module structures of ADAN
    Convolution-enhanced spatial-wise self-attention module
    Convolution-enhanced channel-wise self-attention module
    Multi-scale feedforward neural network
    Loss function analysis results on UCMerced LandUse with an upscaling factor of ×2
    PSNR analysis results on UCMerced LandUse with an upscaling factor of ×2
    Visual comparison on UCMerced LandUse with an upscaling factor of ×2
    Visual comparison on UCMerced LandUse with an upscaling factor of ×3
    Visual comparison on UCMerced LandUse with an upscaling factor of ×4
    Residual comparison on UCMerced LandUse with an upscaling factor of ×2
    • Table 1. PSNR/SSIM results on the UCMerced LandUse dataset (×2, ×3, and ×4)


      | Scale | Bicubic | SRCNN | FSRCNN | VDSR | LGCNet | DCM | HSENet | TransENet | Ours |
      |-------|---------|-------|--------|------|--------|-----|--------|-----------|------|
      | ×2 | 30.76/0.8789 | 32.84/0.9152 | 33.18/0.9196 | 33.38/0.9220 | 33.48/0.9235 | 33.65/0.9274 | 34.22/0.9327 | 35.43/0.9355 | 35.62/0.9717 |
      | ×3 | 27.46/0.7631 | 28.66/0.8038 | 29.09/0.8167 | 29.28/0.8232 | 29.28/0.8238 | 29.52/0.8349 | 30.00/0.8420 | 31.03/0.8526 | 31.10/0.8811 |
      | ×4 | 25.65/0.6725 | 26.78/0.7219 | 26.93/0.7267 | 26.85/0.7317 | 27.02/0.7333 | 27.22/0.7528 | 27.73/0.7623 | 28.74/0.7694 | 28.84/0.8003 |
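
The PSNR (dB) and SSIM values reported in the tables follow standard definitions. As a minimal NumPy sketch (function names are illustrative; the SSIM here is a single global window, whereas published SSIM figures normally average an 11×11 Gaussian-windowed version, so values differ slightly):

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, test, max_val=255.0):
    """Simplified single-window SSIM (papers use a windowed, averaged variant)."""
    x = ref.astype(np.float64)
    y = test.astype(np.float64)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For example, two images differing everywhere by 10 gray levels give MSE = 100 and PSNR = 10·log10(255²/100) ≈ 28.13 dB; identical images give SSIM = 1.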
    • Table 2. PSNR/SSIM results on the AID dataset (×2, ×3, and ×4)


      | Scale | Bicubic | SRCNN | FSRCNN | VDSR | LGCNet | DCM | HSENet | TransENet | Ours |
      |-------|---------|-------|--------|------|--------|-----|--------|-----------|------|
      | ×2 | 32.39/0.8906 | 34.49/0.9286 | 34.73/0.9331 | 35.05/0.9346 | 34.80/0.9320 | 35.21/0.9366 | 35.24/0.9368 | 35.28/0.9374 | 36.93/0.9617 |
      | ×3 | 29.08/0.7863 | 30.55/0.8372 | 30.98/0.8401 | 31.15/0.8522 | 30.73/0.8417 | 31.31/0.8561 | 31.39/0.8572 | 31.45/0.8595 | 32.96/0.8889 |
      | ×4 | 27.30/0.7036 | 28.40/0.7561 | 28.77/0.7729 | 28.99/0.7753 | 28.61/0.7626 | 29.17/0.7824 | 29.21/0.7850 | 29.38/0.7909 | 29.99/0.8177 |
    • Table 3. Average PSNR for each category with an upscaling factor of ×4 on the AID dataset


      | Class | Bicubic | SRCNN | LGCNet | VDSR | DCM | HSENet | TransENet | Ours |
      |-------|---------|-------|--------|------|-----|--------|-----------|------|
      | Airport | 27.03 | 28.17 | 28.39 | 28.82 | 28.99 | 29.03 | 29.23 | 29.31 |
      | Bareland | 34.88 | 35.63 | 35.78 | 35.98 | 36.17 | 36.21 | 36.20 | 36.42 |
      | Baseball field | 29.06 | 30.51 | 30.75 | 31.18 | 31.36 | 31.23 | 31.59 | 31.28 |
      | Beach | 31.07 | 31.92 | 32.08 | 32.29 | 32.45 | 32.76 | 32.55 | 33.51 |
      | Bridge | 28.98 | 30.41 | 30.67 | 31.19 | 31.39 | 31.30 | 31.63 | 30.83 |
      | Center | 25.26 | 26.59 | 26.92 | 27.48 | 27.72 | 27.84 | 28.03 | 27.44 |
      | Church | 22.15 | 23.41 | 23.68 | 24.12 | 24.29 | 24.39 | 24.51 | 24.62 |
      | Commercial | 25.83 | 27.05 | 27.24 | 27.62 | 27.78 | 27.99 | 27.97 | 28.39 |
      | Dense residential | 23.05 | 24.13 | 24.33 | 24.70 | 24.87 | 24.44 | 25.13 | 24.62 |
      | Desert | 38.49 | 38.84 | 39.06 | 39.13 | 39.27 | 39.37 | 39.31 | 38.99 |
      | Farmland | 32.30 | 33.48 | 33.77 | 34.20 | 34.42 | 33.90 | 34.58 | 34.19 |
      | Forest | 27.39 | 28.15 | 28.20 | 28.36 | 28.47 | 38.31 | 28.56 | 28.37 |
      | Industrial | 24.75 | 26.00 | 26.24 | 26.72 | 26.92 | 26.99 | 27.21 | 27.30 |
      | Meadow | 32.06 | 32.57 | 32.65 | 32.77 | 32.88 | 32.74 | 32.94 | 33.30 |
      | Medium residential | 26.09 | 27.37 | 27.63 | 28.06 | 28.25 | 28.11 | 28.45 | 26.94 |
      | Mountain | 28.04 | 28.90 | 28.97 | 29.11 | 29.18 | 29.26 | 29.28 | 28.89 |
      | Park | 26.23 | 27.25 | 27.37 | 27.69 | 27.82 | 28.23 | 28.01 | 28.11 |
      | Parking | 22.33 | 24.01 | 24.40 | 25.21 | 25.74 | 26.17 | 26.40 | 26.01 |
      | Playground | 27.27 | 28.72 | 29.04 | 29.62 | 29.92 | 31.18 | 30.30 | 32.00 |
      | Pond | 28.94 | 29.85 | 30.00 | 30.26 | 30.39 | 30.40 | 30.53 | 30.33 |
      | Port | 24.69 | 25.82 | 26.02 | 26.43 | 26.62 | 26.92 | 26.91 | 27.47 |
      | Railway station | 26.31 | 27.55 | 27.76 | 28.19 | 28.38 | 28.47 | 28.61 | 28.42 |
      | Resort | 25.98 | 27.12 | 27.32 | 27.71 | 27.88 | 27.99 | 28.08 | 27.66 |
      | River | 29.61 | 30.48 | 30.60 | 30.82 | 30.91 | 30.88 | 31.00 | 30.28 |
      | School | 24.91 | 26.13 | 26.34 | 26.78 | 26.94 | 27.51 | 27.22 | 27.52 |
      | Sparse residential | 25.41 | 26.16 | 26.27 | 26.46 | 26.53 | 26.44 | 26.43 | 26.58 |
      | Square | 26.75 | 28.13 | 28.39 | 28.91 | 29.13 | 29.05 | 29.39 | 28.79 |
      | Stadium | 24.81 | 26.10 | 26.37 | 26.88 | 27.10 | 27.28 | 27.41 | 28.01 |
      | Storage tanks | 24.18 | 25.27 | 25.48 | 25.86 | 26.00 | 26.07 | 26.20 | 26.80 |
      | Viaduct | 25.86 | 27.03 | 27.26 | 27.74 | 27.93 | 28.12 | 28.21 | 28.01 |
      | AVG | 27.30 | 28.40 | 28.61 | 28.99 | 29.17 | 29.21 | 29.38 | 29.99 |
    • Table 4. LPIPS results on the UCMerced LandUse dataset with scaling factors of ×2, ×3, and ×4


      | Scale | Bicubic | SRCNN | FSRCNN | VDSR | LGCNet | DCM | HSENet | TransENet | Ours |
      |-------|---------|-------|--------|------|--------|-----|--------|-----------|------|
      | ×2 | 0.0721 | 0.0444 | 0.0471 | 0.0287 | 0.0293 | 0.0284 | 0.0266 | 0.0279 | 0.0256 |
      | ×3 | 0.1281 | 0.0945 | 0.1062 | 0.0801 | 0.0752 | 0.0698 | 0.0654 | 0.0649 | 0.0641 |
      | ×4 | 0.1650 | 0.1260 | 0.1395 | 0.1102 | 0.1093 | 0.1046 | 0.1081 | 0.1030 | 0.1022 |
    • Table 5. Ablation results of module structures


      | Model | GISM | CESSM | CECSM | PSNR/dB | SSIM |
      |-------|------|-------|-------|---------|------|
      | Model 0 | × | × | × | 36.81 | 0.9609 |
      | Model 1 | ✓ | × | × | 36.86 | 0.9613 |
      | Model 2 | ✓ | ✓ | × | 36.90 | 0.9615 |
      | Model 3 (ours) | ✓ | ✓ | ✓ | 36.93 | 0.9617 |
    • Table 6. Ablation results of the multi-scale feedforward neural network (MSFFN)


      | Method | Params/M | FLOPs/G | PSNR/dB |
      |--------|----------|---------|---------|
      | 3×3 | 1.98 | 126 | 36.91 |
      | 5×5 | 2.08 | 140 | 36.88 |
      | 7×7 | 2.35 | 157 | 36.90 |
      | MSFFN (ours) | 2.13 | 147 | 36.93 |
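
Table 6 contrasts single-kernel feedforward variants (3×3, 5×5, 7×7) with the multi-scale design. As an illustration only, and not the authors' exact MSFFN wiring, the general idea of mixing spatial information at several kernel sizes inside a feedforward block can be sketched in PyTorch (the expansion ratio, branch fusion by summation, and residual connection are assumptions):

```python
import torch
import torch.nn as nn

class MultiScaleFFN(nn.Module):
    """Toy multi-scale feedforward block: expand channels with a 1x1 conv,
    mix spatially with parallel depthwise convs of different kernel sizes,
    then project back. Illustrative sketch, not ADAN's exact MSFFN."""

    def __init__(self, dim: int, expansion: int = 2):
        super().__init__()
        hidden = dim * expansion
        self.expand = nn.Conv2d(dim, hidden, kernel_size=1)
        # one depthwise branch per kernel size; padding preserves spatial size
        self.branches = nn.ModuleList([
            nn.Conv2d(hidden, hidden, k, padding=k // 2, groups=hidden)
            for k in (3, 5, 7)
        ])
        self.act = nn.GELU()
        self.project = nn.Conv2d(hidden, dim, kernel_size=1)

    def forward(self, x):
        h = self.act(self.expand(x))
        h = sum(branch(h) for branch in self.branches)  # fuse scales by summation
        return x + self.project(h)  # residual connection

# usage: a 64-channel feature map keeps its shape
feat = torch.randn(1, 64, 32, 32)
out = MultiScaleFFN(64)(feat)
```

Sharing one expanded representation across all branches keeps the parameter count close to a single large-kernel variant, which is consistent with MSFFN's 2.13 M sitting between the 5×5 and 7×7 rows above.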
    • Table 7. Comparison analysis of the multi-scale feedforward neural network (MSFFN) with other representative feedforward neural networks


      | Method | Params/M | FLOPs/G | PSNR/dB |
      |--------|----------|---------|---------|
      | MLP | 1.96 | 120 | 36.85 |
      | Conv-FFN | 2.02 | 131 | 36.88 |
      | GDFN | 2.09 | 142 | 36.89 |
      | MSFFN (ours) | 2.13 | 147 | 36.93 |
    • Table 8. Comparison analysis of ADAN with other representative CNN-Transformer architectures


      | Method | PSNR/dB | SSIM |
      |--------|---------|------|
      | TransENet | 35.43 | 0.9355 |
      | Spatial dimension Transformer | 35.52 | 0.9521 |
      | Frequency dimension Transformer | 35.56 | 0.9602 |
      | ADAN (ours) | 35.62 | 0.9719 |
    • Table 9. Model complexity analysis


      | Method | Params/M | FLOPs/G | PSNR/dB |
      |--------|----------|---------|---------|
      | LGCNet | 0.193 | 7.11 | 33.48 |
      | DCM | 2.180 | 7.32 | 33.65 |
      | HSENet | 5.400 | 10.80 | 34.22 |
      | TransENet | 37.800 | 9.32 | 35.43 |
      | ADAN (ours) | 4.120 | 7.16 | 35.62 |
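
The Params column in Table 9 is the usual trainable-parameter count, which in PyTorch reduces to a one-liner (the single conv layer below is just a stand-in, not one of the compared models):

```python
import torch.nn as nn

def count_params(model: nn.Module) -> int:
    """Number of trainable parameters (Table 9 reports this in millions)."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# e.g. one 3x3 conv mapping 3 -> 8 channels: 3*3*3*8 weights + 8 biases = 224
layer = nn.Conv2d(3, 8, kernel_size=3)
```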
    Citation
    Fei Wu, Jiacheng Chen, Jun Yang, Wanliang Wang, Guoqing Li. Remote-sensing image reconstruction based on adaptive dual-domain attention network[J]. Opto-Electronic Engineering, 2025, 52(4): 240297

    Paper Information

    Category: Article

    Received: Dec. 17, 2024

    Accepted: Feb. 17, 2025

    Published Online: Jun. 11, 2025

    Corresponding author: Jun Yang (杨俊)

    DOI: 10.12086/oee.2025.240297
