Infrared and Laser Engineering, Volume 54, Issue 5, 20240569 (2025)

Laser interference image inpainting with global semantic perception and texture frequency domain constraints

Peiyao ZHAO1, Bin FENG1, Xinpeng YANG1, Xikui MIAO2, Yunlong WU3, and Qing YE3
Author Affiliations
  • 1School of Automation, Northwestern Polytechnical University, Xi'an 710129, China
  • 263891 Unit of the Chinese People's Liberation Army, Luoyang 471300, China
  • 3College of Electronic Warfare, National University of Defense Technology, Hefei 230037, China

    Objective

    Laser interference is a significant issue in imaging systems: interference spots can obscure key target information, severely degrading image quality and complicating subsequent image analysis. This is particularly problematic for systems that require high-precision imaging, such as surveillance, military reconnaissance, and remote sensing. Addressing it requires a method for effectively restoring laser-interfered images. The goal of this study is to design an advanced image restoration network that accurately recovers images affected by laser interference, combining global semantic perception with texture frequency-domain constraints to enhance the quality of the restored images. The proposed model aims to inpaint images in which laser interference causes significant distortion, restoring both global context and fine texture details. This research is crucial for improving the performance of imaging systems under laser interference, enabling more reliable and accurate analysis in practical applications including defense, environmental monitoring, and remote sensing, and for meeting the growing demands of high-precision image inpainting in dynamic and challenging environments.

    Methods

    A laser interference image restoration network is designed in this study. The model consists of two stages. The first stage uses a hybrid network structure that combines self-attention mechanisms with a hierarchical feature extraction module; a sliding-window self-attention mechanism gradually expands the receptive field, allowing the model to capture both local and global context information (Fig.2).
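The sliding-window self-attention idea can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the window size, the use of the raw features as Q, K, and V, and the cyclic shift between the two passes are simplifying assumptions. The shift lets information cross window borders, which is how stacked blocks gradually enlarge the receptive field.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_attention(feat, win=4):
    """Self-attention computed independently inside each win x win window."""
    H, W, C = feat.shape
    out = np.zeros_like(feat)
    for i in range(0, H, win):
        for j in range(0, W, win):
            x = feat[i:i + win, j:j + win].reshape(-1, C)  # tokens of one window
            attn = softmax(x @ x.T / np.sqrt(C))           # Q = K = V = x for brevity
            out[i:i + win, j:j + win] = (attn @ x).reshape(win, win, C)
    return out

def shifted_window_block(feat, win=4):
    """Two attention passes; the cyclic shift between them lets information
    cross window borders, so stacking blocks expands the receptive field."""
    feat = window_attention(feat, win)
    feat = np.roll(feat, shift=(win // 2, win // 2), axis=(0, 1))
    feat = window_attention(feat, win)
    return np.roll(feat, shift=(-win // 2, -win // 2), axis=(0, 1))

feat = np.random.rand(16, 16, 8).astype(np.float32)
out = shifted_window_block(feat)
print(out.shape)  # (16, 16, 8)
```

A production version would use learned projections for Q, K, V and multiple heads; the sketch only shows why windowing keeps attention cost local while shifting restores global connectivity.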
In the second stage, a contextual attention mechanism refines the restoration results by analyzing the correlation between the undisturbed and interfered regions of the image (Fig.7). To enhance the restoration of fine texture details, a cosine transform loss function is integrated into the model, optimizing the image in the frequency domain to better preserve high-frequency components (Fig.9). The performance of the network is evaluated with image quality metrics including SSIM and PSNR (Tab.1), and the network is trained on a large dataset of laser-interfered images with corresponding ground truth to ensure robust performance under varying interference conditions (Fig.11).

    Results and Discussions

    The comparison results in Tab.4 reveal significant differences in structural similarity (SSIM) between the algorithms in both the interfered and non-interfered regions. Our proposed method outperforms the others in both areas, achieving SSIM values of 0.969 in the interfered region and 0.997 in the non-interfered region, demonstrating its capability to restore complex texture details in the interfered regions while preserving image quality in the non-interfered regions. In contrast, the Pconv method achieves an SSIM of 0.957 in the laser-interfered region, slightly below our approach, indicating that it struggles with fine texture restoration; its SSIM of 0.992 in the non-interfered region is close to the ideal level. The CoordFill and PatchMatch methods perform significantly worse, with SSIM values of 0.711 and 0.687 in the interfered region, respectively, indicating that both are less effective at removing laser interference, especially at recovering detailed textures.
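The frequency-domain constraint described in Methods can be sketched as follows. This is a hedged illustration, not the paper's exact loss: it assumes a 2-D DCT-II of the whole image and an L1 penalty on the coefficient difference, which is one common way to make a "cosine transform loss" emphasize high-frequency texture.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows = frequencies)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0] /= np.sqrt(2.0)  # DC row gets the 1/sqrt(n) scaling
    return m

def dct2(img):
    """Separable 2-D DCT-II: transform rows, then columns."""
    m = dct_matrix(img.shape[0])
    n = dct_matrix(img.shape[1])
    return m @ img @ n.T

def cosine_transform_loss(restored, target):
    """Mean L1 distance between DCT coefficients; penalizing errors in the
    frequency domain helps preserve high-frequency texture components."""
    return np.abs(dct2(restored) - dct2(target)).mean()

a = np.random.rand(32, 32)
print(cosine_transform_loss(a, a))  # 0.0 for identical images
```

In training this term would be added to the usual pixel-domain reconstruction loss with a weighting coefficient; the weighting and any per-band emphasis are design choices not specified here.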
In the non-interfered regions, their SSIM values of 0.982 and 0.980 are also lower than those of Pconv and our method, further highlighting their limitations in maintaining overall image quality. These results demonstrate the superior performance of the proposed algorithm in laser interference image inpainting, in terms of both detail preservation and global image consistency.

    Conclusions

    A laser interference image restoration network is proposed in this study that effectively addresses the challenge of restoring images affected by laser interference. The network has a two-stage structure: the first stage uses a hybrid module combining self-attention mechanisms with hierarchical feature extraction, while the second stage employs a contextual attention mechanism to refine the restoration results. A cosine transform loss function enhances texture detail recovery in the interfered regions. Experimental results demonstrate that the proposed method outperforms existing algorithms in both structural similarity and texture detail restoration, achieving SSIM values of 0.969 in the interfered region and 0.997 in the non-interfered region. The model also maintains overall image quality while effectively restoring complex textures in regions affected by laser interference. This approach offers a promising solution for improving the performance of imaging systems where laser interference is a concern, and its robust performance under varying conditions demonstrates potential for real-world deployment in applications such as surveillance, military reconnaissance, and remote sensing.
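The region-wise evaluation reported above (separate SSIM/PSNR inside and outside the interference mask) can be sketched schematically. The metric below is a simplified single-window SSIM over global statistics rather than the standard Gaussian-windowed SSIM used in published benchmarks, and the mask and images are synthetic placeholders, so the numbers are illustrative only.

```python
import numpy as np

def psnr(x, y, peak=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((x - y) ** 2)
    return 10 * np.log10(peak ** 2 / mse) if mse > 0 else float("inf")

def global_ssim(x, y, peak=1.0):
    """Simplified SSIM from global statistics (no local Gaussian window)."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Evaluate separately inside and outside a hypothetical interference mask
gt = np.random.rand(64, 64)
restored = np.clip(gt + np.random.normal(0, 0.02, gt.shape), 0, 1)
mask = np.zeros_like(gt, dtype=bool)
mask[16:48, 16:48] = True  # placeholder interfered region

print(global_ssim(restored[mask], gt[mask]), psnr(restored[mask], gt[mask]))
print(global_ssim(restored[~mask], gt[~mask]), psnr(restored[~mask], gt[~mask]))
```

Reporting the two regions separately, as Tab.4 does, distinguishes an algorithm's ability to synthesize plausible texture in the damaged area from its ability to leave clean areas untouched.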



    Peiyao ZHAO, Bin FENG, Xinpeng YANG, Xikui MIAO, Yunlong WU, Qing YE. Laser interference image inpainting with global semantic perception and texture frequency domain constraints[J]. Infrared and Laser Engineering, 2025, 54(5): 20240569

    Paper Information

    Category: Optical imaging, display and information processing

    Received: Jan. 12, 2025

    Accepted: --

    Published Online: May 26, 2025


    DOI: 10.3788/IRLA20240569
