Opto-Electronic Engineering, Volume 49, Issue 5, 210382 (2022)

Self-similarity enhancement network for image super-resolution

Ronggui Wang, Hui Lei, Juan Yang*, and Lixia Xue
Author Affiliations
  • School of Computer and Information, Hefei University of Technology, Hefei, Anhui 230601, China
    Figures & Tables (14)
    Basic architectures. (a) The architecture of our proposed self-similarity enhancement network; (b) the cross-level feature enhancement module; (c) the pooling attention dense blocks
    The proposed feature enhancement module
    Receptive field block
    The proposed Cross-Level Co-Attention architecture. "Fgp" denotes the global average pooling
    Schematic illustration of the pooling attention
    Super-resolution results of "Img048" in the Urban100 dataset for 4× magnification
    Super-resolution results of "Img092" in the Urban100 dataset for 4× magnification
    Super-resolution results of "223061" in the BSD100 dataset for 4× magnification
    Super-resolution results of "253027" in the BSD100 dataset for 4× magnification
    Convergence analysis on CLFE and PADB. The curves for each combination are based on the PSNR on Set5 with scaling factor 4× over 800 epochs.
    Results of each module in the network. (a) The result of the first-layer convolution; (b) the results of the cross-level feature enhancement module; (c) the results of the stacked pooling attention dense blocks
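    The "Fgp" global average pooling named in the Cross-Level Co-Attention figure is the standard squeeze step of channel attention. The sketch below is only a rough illustration of that pattern under simplified assumptions (the weight matrix `w` and the single sigmoid gate are placeholders, not the paper's exact module):

    ```python
    import numpy as np

    def global_avg_pool(feat: np.ndarray) -> np.ndarray:
        """Fgp: squeeze a C×H×W feature map to a C-dim channel descriptor."""
        return feat.mean(axis=(1, 2))

    def channel_attention(feat: np.ndarray, w: np.ndarray) -> np.ndarray:
        """Reweight channels with a sigmoid-gated descriptor (w is a stand-in for learned weights)."""
        z = global_avg_pool(feat)               # (C,)
        gate = 1.0 / (1.0 + np.exp(-(w @ z)))   # (C,) attention weights in (0, 1)
        return feat * gate[:, None, None]       # broadcast over H and W

    C, H, W = 4, 6, 6
    feat = np.random.rand(C, H, W)
    out = channel_attention(feat, np.eye(C))
    print(out.shape)  # (4, 6, 6)
    ```

    The spatial shape is preserved; only per-channel magnitudes are rescaled.
    
    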
    • Table 1. The average results of PSNR/SSIM with scale factors 2×, 3×, and 4× on the Set5, Set14, BSD100, Urban100, and Manga109 datasets

      | Scale | Method       | Set5         | Set14        | BSD100       | Urban100     | Manga109     |
      |       |              | PSNR/SSIM    | PSNR/SSIM    | PSNR/SSIM    | PSNR/SSIM    | PSNR/SSIM    |
      | 2×    | Bicubic      | 33.66/0.9299 | 30.24/0.8688 | 29.56/0.8431 | 26.88/0.8409 | 30.80/0.9339 |
      |       | SRCNN[7]     | 36.66/0.9542 | 32.45/0.9067 | 31.36/0.8879 | 29.50/0.8946 | 35.60/0.9663 |
      |       | VDSR[8]      | 37.53/0.9590 | 33.05/0.9130 | 31.90/0.8960 | 30.77/0.9140 | 37.22/0.9750 |
      |       | M2SR[23]     | 38.01/0.9607 | 33.72/0.9202 | 32.17/0.8997 | 32.20/0.9295 | 38.71/0.9772 |
      |       | LapSRN[34]   | 37.52/0.9591 | 33.08/0.9130 | 31.80/0.8950 | 30.41/0.9100 | 37.27/0.9740 |
      |       | PMRN[35]     | 38.13/0.9609 | 33.85/0.9204 | 32.28/0.9010 | 32.59/0.9328 | 38.91/0.9775 |
      |       | OISR-RK2[37] | 38.12/0.9609 | 33.80/0.9193 | 32.26/0.9006 | 32.48/0.9317 | −            |
      |       | DBPN[38]     | 38.09/0.9600 | 33.85/0.9190 | 32.27/0.9000 | 32.55/0.9324 | 38.89/0.9775 |
      |       | RDN[36]      | 38.24/0.9614 | 34.01/0.9212 | 32.34/0.9017 | 32.89/0.9353 | 39.18/0.9780 |
      |       | SSEN (ours)  | 38.11/0.9609 | 33.92/0.9204 | 32.28/0.9011 | 32.87/0.9351 | 39.06/0.9778 |
      | 3×    | Bicubic      | 30.39/0.8682 | 27.55/0.7742 | 27.21/0.7385 | 24.46/0.7349 | 26.96/0.8546 |
      |       | SRCNN[7]     | 32.75/0.9090 | 29.28/0.8209 | 28.41/0.7863 | 26.24/0.7989 | 30.59/0.9107 |
      |       | VDSR[8]      | 33.66/0.9213 | 29.77/0.8314 | 28.82/0.7976 | 27.14/0.8279 | 32.01/0.9310 |
      |       | M2SR[23]     | 34.43/0.9275 | 30.39/0.8440 | 29.11/0.8056 | 28.29/0.8551 | 33.59/0.9447 |
      |       | LapSRN[34]   | 33.82/0.9227 | 29.79/0.8320 | 28.82/0.7973 | 27.07/0.8272 | 32.19/0.9334 |
      |       | PMRN[35]     | 34.57/0.9280 | 30.43/0.8444 | 29.19/0.8075 | 28.51/0.8601 | 33.85/0.9465 |
      |       | OISR-RK2[37] | 34.55/0.9282 | 30.46/0.8443 | 29.18/0.8075 | 28.50/0.8597 | −            |
      |       | RDN[36]      | 34.71/0.9296 | 30.57/0.8468 | 29.26/0.8093 | 28.80/0.8653 | 34.13/0.9484 |
      |       | SSEN (ours)  | 34.64/0.9289 | 30.53/0.8462 | 29.20/0.8079 | 28.66/0.8635 | 34.01/0.9474 |
      | 4×    | Bicubic      | 28.42/0.8104 | 26.00/0.7027 | 25.96/0.6675 | 23.14/0.6577 | 24.89/0.7866 |
      |       | SRCNN[7]     | 30.48/0.8628 | 27.50/0.7513 | 26.90/0.7101 | 24.52/0.7221 | 27.58/0.8555 |
      |       | VDSR[8]      | 31.35/0.8838 | 28.02/0.7680 | 27.29/0.7260 | 25.18/0.7540 | 28.83/0.8870 |
      |       | M2SR[23]     | 32.23/0.8952 | 28.67/0.7837 | 27.60/0.7373 | 26.19/0.7889 | 30.51/0.9093 |
      |       | LapSRN[34]   | 31.54/0.8850 | 28.19/0.7720 | 27.32/0.7270 | 25.21/0.7551 | 29.09/0.8900 |
      |       | PMRN[35]     | 32.34/0.8971 | 28.71/0.7850 | 27.66/0.7392 | 26.37/0.7950 | 30.71/0.9107 |
      |       | OISR-RK2[37] | 32.32/0.8965 | 28.72/0.7843 | 27.66/0.7390 | 26.37/0.7953 | −            |
      |       | DBPN[38]     | 32.47/0.8980 | 28.82/0.7860 | 27.72/0.7400 | 26.38/0.7946 | 30.91/0.9137 |
      |       | RDN[36]      | 32.47/0.8990 | 28.81/0.7871 | 27.72/0.7419 | 26.61/0.8028 | 31.00/0.9151 |
      |       | SSEN (ours)  | 32.42/0.8982 | 28.79/0.7864 | 27.69/0.7400 | 26.49/0.7993 | 30.88/0.9132 |
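      For reference, the PSNR figures in Table 1 follow the standard definition over mean squared error. A minimal sketch is below; note this is generic, and the paper's exact evaluation protocol (e.g. Y-channel conversion or border cropping) may differ:

      ```python
      import numpy as np

      def psnr(ref: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
          """Peak signal-to-noise ratio in dB between a reference and a test image."""
          mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
          if mse == 0:
              return float("inf")  # identical images
          return 10.0 * np.log10((max_val ** 2) / mse)

      # Toy example: images differing by 1 at every pixel -> MSE = 1
      ref = np.zeros((8, 8))
      deg = ref + 1.0
      print(round(psnr(ref, deg), 2))  # 48.13 (= 20*log10(255))
      ```

      SSIM is computed over local windows of luminance, contrast, and structure statistics and is more involved; library implementations are typically used.
      
      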
    • Table 2. The results of the cross-level feature enhancement module and the pooling attention dense block with scale factor 4× on Set5

      |               | Baseline |        |        |        |
      | CLFE          | ×        | √      | ×      | √      |
      | Cascaded PADB | ×        | ×      | √      | √      |
      | PSNR/dB       | 32.28    | 32.35  | 32.37  | 32.42  |
      | SSIM          | 0.8962   | 0.8971 | 0.8972 | 0.8982 |
    • Table 3. Model size and MAC comparison on Set14 (2×). "MAC" denotes the number of multiply-accumulate operations

      | Model        | Params | MACs  | PSNR/dB | SSIM   |
      | RDN[36]      | 22M    | 5096G | 34.01   | 0.9212 |
      | OISR-RK3[37] | 42M    | 9657G | 33.94   | 0.9206 |
      | DBPN[38]     | 10M    | 2189G | 33.85   | 0.9190 |
      | EDSR[39]     | 41M    | 9385G | 33.92   | 0.9195 |
      | SSEN         | 15M    | 3436G | 33.92   | 0.9204 |
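      The parameter and MAC counts in Table 3 are sums over all layers; for a single convolution they can be estimated as below. This is a generic sketch, not the authors' accounting script, and the example layer shape (3×3, 64→64, 1280×720 output) is an illustrative assumption:

      ```python
      def conv2d_stats(c_in: int, c_out: int, k: int, h: int, w: int):
          """Parameters and multiply-accumulates for one k×k conv (stride 1, bias omitted)."""
          params = c_out * c_in * k * k   # one weight per (out-ch, in-ch, ky, kx)
          macs = params * h * w           # one MAC per weight per output position
          return params, macs

      # Example: a 3×3, 64-to-64-channel conv producing a 1280×720 feature map
      p, m = conv2d_stats(64, 64, 3, 720, 1280)
      print(p, m)  # 36864 33973862400  (≈34 GMACs for this one layer)
      ```

      Summing such terms over every layer, at the output resolution each layer actually works at, yields totals on the scale of the "G" figures in Table 3.
      
      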
    Citation: Ronggui Wang, Hui Lei, Juan Yang, Lixia Xue. Self-similarity enhancement network for image super-resolution[J]. Opto-Electronic Engineering, 2022, 49(5): 210382
    Paper Information

    Received: Nov. 26, 2021

    Accepted: --

    Published Online: Jun. 10, 2022

    The Author Email: Juan Yang (yangjuan6985@163.com)

    DOI: 10.12086/oee.2022.210382
