Optics and Precision Engineering, Volume 31, Issue 15, 2273 (2023)

Image super-resolution reconstruction based on attention and wide-activated dense residual network

Qiqi KOU1,*, Chao LI2, Deqiang CHENG2, Liangliang CHEN2, Haohui MA2, and Jianying ZHANG2
Author Affiliations
  • 1School of Computer Science and Technology, China University of Mining and Technology, Xuzhou 221116, China
  • 2School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
    Figures & Tables (14)
    Structure of attention and wide-activated dense residual network
    SKNet attention mechanism
    Comparison of reconstruction effects of different combination methods on BSD100_42049
    Comparison of reconstruction effects of different combinations on Urban100_img002
    PSNR with 4× super-resolution at different channel multiples
    Loss function curves at different magnifications
    Comparison of reconstruction effects by different methods on Set5_baby
    Comparison of reconstruction effects by different methods on Set14_monarch
    Comparison of reconstruction effects by different methods on BSD100_253027
    Comparison of reconstruction effects by different methods on Urban100_img091
    • Table 1. Effect of number of residual groups (D) and number of residual blocks(R) on performance of 2× super-resolution reconstruction

      Combination   Set5     Set14    BSD100   Urban100   Params/M
      R2D4          38.069   33.565   32.189   32.086     2.68
      R4D2          38.029   33.538   32.153   32.069     2.52
      R4D4          38.080   33.623   32.230   32.341     4.54
      R4D6          37.896   33.325   31.944   31.317     6.55
      R6D4          38.100   33.676   32.227   32.320     6.40
      (PSNR in dB; parameter count in millions)
    • Table 2. Effect of convolution kernel size and connection mode of the spatial feature transformation layer on performance of 2× super-resolution reconstruction

      Connection   Kernel size   Set5     Set14    BSD100   Urban100
      Serial       1             38.075   33.612   32.220   32.283
      Serial       3             38.083   33.597   32.219   32.247
      Parallel     1             38.080   33.623   32.230   32.341
      Parallel     3             38.080   33.612   32.222   32.239
      (PSNR in dB)
    • Table 3. Effects of attention mechanisms on 2× super-resolution reconstruction performance

      Attention mechanism   Set5     Set14    BSD100   Urban100
      None (×)              38.038   33.572   32.202   32.058
      SE                    38.046   33.543   32.165   32.020
      CA                    38.069   33.558   32.175   32.082
      CBAM                  38.005   33.482   32.167   31.913
      SK                    38.080   33.623   32.230   32.341
      (PSNR in dB)
    • Table 4. Reconstruction comparison on benchmark datasets at magnifications of ×2, ×3 and ×4

      Scale   Model            Set5            Set14           BSD100          Urban100
                               PSNR    SSIM    PSNR    SSIM    PSNR    SSIM    PSNR    SSIM
      ×2      Bicubic[5]       36.66   0.929   30.24   0.868   29.56   0.843   26.88   0.840
              ESPCN[8]         37.00   0.956   32.75   0.910   31.51   0.894   29.87   0.907
              VDSR[9]          37.54   0.959   33.03   0.912   31.90   0.896   30.76   0.914
              DRCN[11]         37.63   0.959   33.06   0.912   31.85   0.884   30.76   0.913
              LapSRN[30]       37.52   0.959   32.99   0.912   31.80   0.895   30.45   0.913
              WDSR_Mini[23]    37.96   0.960   33.50   0.914   32.13   0.899   31.79   0.921
              IMDN[14]         38.00   0.961   33.63   0.912   32.19   0.900   32.17   0.928
              LatticeNet[15]   38.15   0.961   33.78   0.919   32.25   0.901   32.43   0.930
              WDRN             38.08   0.966   33.59   0.970   32.20   0.900   32.31   0.938
      ×3      Bicubic[5]       30.39   0.868   27.55   0.774   27.21   0.738   24.46   0.735
              ESPCN[8]         33.02   0.914   29.49   0.827   28.50   0.794   26.41   0.816
              VDSR[9]          33.65   0.921   29.78   0.831   28.82   0.798   27.14   0.828
              DRCN[11]         33.82   0.923   29.77   0.831   28.80   0.796   27.15   0.823
              LapSRN[30]       33.82   0.923   29.87   0.832   28.83   0.798   27.08   0.828
              WDSR_Mini[23]    34.375  0.925   30.31   0.840   29.06   0.803   27.97   0.847
              IMDN[14]         34.36   0.927   30.32   0.842   29.09   0.805   28.17   0.852
              LatticeNet[15]   34.53   0.928   30.39   0.842   29.15   0.806   28.33   0.854
              WDRN             34.58   0.938   30.47   0.910   29.17   0.806   28.45   0.857
      ×4      Bicubic[5]       28.42   0.810   26.00   0.703   25.96   0.668   23.14   0.657
              ESPCN[8]         30.66   0.865   27.71   0.756   26.98   0.712   24.60   0.736
              VDSR[9]          31.35   0.884   28.01   0.767   27.30   0.725   25.18   0.751
              DRCN[11]         31.53   0.885   28.03   0.767   27.24   0.723   25.14   0.751
              LapSRN[30]       31.54   0.886   28.19   0.772   27.33   0.726   25.21   0.756
              WDSR_Mini[23]    32.17   0.893   28.59   0.783   27.56   0.731   25.95   0.778
              IMDN[14]         32.21   0.895   28.58   0.787   27.56   0.735   26.04   0.784
              LatticeNet[15]   32.30   0.896   28.68   0.783   27.62   0.737   26.25   0.787
              WDRN             32.33   0.909   28.77   0.854   27.66   0.738   26.39   0.786
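Table 3 reports the SK (selective kernel) mechanism outperforming SE, CA, and CBAM. The defining step of SK-style attention is a per-channel softmax over parallel convolution branches, so the network learns which kernel size to trust for each channel. Below is a minimal, library-free sketch of just that selection step, assuming two branches; the branch convolutions and the squeeze/FC layers that produce the logits are omitted, and all names are illustrative rather than the authors' code:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a small list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def sk_select(branch_a, branch_b, logits_a, logits_b):
    """Selective-kernel style fusion of two branch feature vectors.

    branch_a / branch_b: per-channel responses from two conv branches
    (e.g. different kernel sizes). logits_a / logits_b: per-channel
    attention logits; softmax across the branch axis yields selection
    weights that sum to 1 for each channel.
    """
    fused = []
    for a, b, la, lb in zip(branch_a, branch_b, logits_a, logits_b):
        wa, wb = softmax([la, lb])
        fused.append(wa * a + wb * b)
    return fused
```

With equal logits the two branches are averaged; as one logit grows, the output converges to that branch, which is the soft "kernel selection" the table's SK row refers to.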
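The PSNR figures throughout Tables 1–4 follow the standard peak-signal-to-noise-ratio definition, PSNR = 10·log10(MAX² / MSE). As a reference, here is a minimal sketch of that computation on raw 8-bit pixel grids; published super-resolution benchmarks typically evaluate on the luminance (Y) channel with border cropping, which this sketch omits:

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two same-sized images.

    img_a, img_b: images as nested lists of pixel values.
    Higher PSNR means the reconstruction is closer to the reference.
    """
    flat_a = [p for row in img_a for p in row]
    flat_b = [p for row in img_b for p in row]
    mse = sum((a - b) ** 2 for a, b in zip(flat_a, flat_b)) / len(flat_a)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

For example, two 2×2 images differing by 5 at every pixel give MSE = 25 and PSNR ≈ 34.15 dB, in the same range as the ×3 results above.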
    Citation

    Qiqi KOU, Chao LI, Deqiang CHENG, Liangliang CHEN, Haohui MA, Jianying ZHANG. Image super-resolution reconstruction based on attention and wide-activated dense residual network[J]. Optics and Precision Engineering, 2023, 31(15): 2273

    Paper Information

    Category: Information Sciences

    Received: Nov. 1, 2022

    Accepted: --

    Published Online: Sep. 5, 2023

    Author Email: Qiqi KOU (kouqiqi@cumt.edu.cn)

    DOI: 10.37188/OPE.20233115.2273
