Journal of Optoelectronics · Laser, Volume 36, Issue 4, 391 (2025)

Image super-resolution reconstruction based on AMRMA model

ZHONG Hui1 and ZHU Zhengwei1,2,*
Author Affiliations
  • 1School of Information Engineering, Southwest University of Science and Technology, Mianyang, Sichuan 621010, China
  • 2Robot Technology Used for Special Environment Key Laboratory of Sichuan Province, Mianyang, Sichuan 621010, China

    Existing CNN (convolutional neural network)-based image super-resolution reconstruction methods usually operate on either full-resolution or progressively lower-resolution image representations. The former yields spatially accurate but contextually weak reconstructions, while the latter produces semantically reliable but spatially less accurate output. To address these problems, this paper proposes a new super-resolution reconstruction model and method based on across-multi-resolution information flow and a multiple attention mechanism (AMRMA). Multi-scale feature extraction and aggregation are realized through across-multi-resolution information flow and an information interaction mechanism, and the multiple attention mechanism captures contextual information to enhance high-frequency image content. A new weighted loss function is designed to optimize the model parameters. Experimental results on five public datasets show that, compared with classic and recent methods such as Bicubic, SRCNN, VDSR, RDN, and MuRNet, the proposed method improves peak signal-to-noise ratio (PSNR) by 0.33 dB and structural similarity (SSIM) by 0.0048, achieving better super-resolution reconstruction quality.
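    The abstract names three mechanisms: cross-resolution information flow with interaction between streams, attention for context capture, and a weighted loss. The following Python (PyTorch) sketch illustrates only the general pattern of these ideas; it is not the authors' AMRMA implementation, and all module names, channel counts, resampling choices, and the loss weight alpha below are assumptions for illustration.

    import torch.nn as nn
    import torch.nn.functional as F

    class ChannelAttention(nn.Module):
        # Squeeze-and-excitation-style channel attention: one common way to
        # capture context and re-weight (enhance) high-frequency features.
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.gate = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels // reduction, 1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, 1),
                nn.Sigmoid(),
            )

        def forward(self, x):
            return x * self.gate(x)

    class CrossResolutionBlock(nn.Module):
        # Two parallel streams (full and half resolution) that exchange
        # features, approximating "across-multi-resolution information flow".
        def __init__(self, channels):
            super().__init__()
            self.conv_high = nn.Conv2d(channels, channels, 3, padding=1)
            self.conv_low = nn.Conv2d(channels, channels, 3, padding=1)
            self.att_high = ChannelAttention(channels)
            self.att_low = ChannelAttention(channels)

        def forward(self, x_high, x_low):
            h = F.relu(self.conv_high(x_high))
            l = F.relu(self.conv_low(x_low))
            # Information interaction: each stream receives the other
            # stream's features, resampled to its own spatial size,
            # before attention is applied.
            l_up = F.interpolate(l, size=h.shape[-2:], mode="bilinear", align_corners=False)
            h_down = F.interpolate(h, size=l.shape[-2:], mode="bilinear", align_corners=False)
            return self.att_high(h + l_up), self.att_low(l + h_down)

    def weighted_loss(sr, hr, alpha=0.9):
        # Hypothetical weighted loss: a pixel-wise L1 term plus an
        # image-gradient term that emphasises edges and texture; the
        # 0.9/0.1 split is an assumption, not the paper's weighting.
        pixel = F.l1_loss(sr, hr)
        grad = (F.l1_loss(sr[..., :, 1:] - sr[..., :, :-1],
                          hr[..., :, 1:] - hr[..., :, :-1]) +
                F.l1_loss(sr[..., 1:, :] - sr[..., :-1, :],
                          hr[..., 1:, :] - hr[..., :-1, :]))
        return alpha * pixel + (1 - alpha) * grad

    A full model in this style would stack several such blocks, fuse the two streams, and finish with an upsampling head (e.g., pixel shuffle); the paper's actual topology, attention types, and loss weighting differ in detail.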


    ZHONG Hui, ZHU Zhengwei. Image super-resolution reconstruction based on AMRMA model[J]. Journal of Optoelectronics · Laser, 2025, 36(4): 391

    Paper Information

    Received: Oct. 30, 2023

    Accepted: Mar. 21, 2025

    Published Online: Mar. 21, 2025

    Corresponding Author Email: ZHU Zhengwei (zhuzwin@163.com)

    DOI: 10.16136/j.joel.2025.04.0564
