Journal of Terahertz Science and Electronic Information Technology, Vol. 21, Issue 3, 360 (2023)

Edge-aware guidance saliency detection based on multi-modal remote sensing images

LIAN Yuanfeng1,2,*, SHI Xu1, and JIANG Cheng3
Author Affiliations
  • 1[in Chinese]
  • 2[in Chinese]
  • 3[in Chinese]

    To address the poor robustness and low detection accuracy of saliency detection in multi-modal remote sensing images, this paper proposes a novel and efficient Multi-modal Edge-aware Guidance Network (MEGNet), which mainly consists of a saliency detection backbone for multi-modal remote sensing images, a cross-modal feature sharing module, and an edge-aware guidance network. First, a Cross-modal Feature Sharing Module (CFSM) is applied during feature extraction from remote sensing image pairs; it encourages the modalities to complement each other during feature extraction and suppresses the influence of defective features from either modality. Second, the Edge-Aware Guidance Network (EAGN) verifies the effectiveness of edge features through an edge-map supervision module, so that the final saliency map has clear boundaries. Finally, experiments are carried out on three remote sensing image datasets for salient object detection. The average Fβ, Mean Absolute Error (MAE), and Sm scores are 0.917 6, 0.009 5 and 0.919 9, respectively. The experimental results show that the proposed MEGNet is well suited to saliency detection in multi-modal scenes.
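
    The abstract does not give implementation details, so the sketch below is only an illustrative reading of the two modules it names: a cross-modal gating block standing in for the CFSM, and a Sobel-derived edge supervision loss standing in for the EAGN's edge-map supervision. The class and function names (CrossModalFeatureSharing, edge_supervision_loss) and the choice of PyTorch are assumptions, not the authors' code.

```python
# Minimal sketch (not the authors' implementation): a plausible cross-modal
# feature sharing gate and an edge-supervised loss, assuming two modality
# feature maps of equal shape and PyTorch as the framework.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossModalFeatureSharing(nn.Module):
    """Hypothetical CFSM: each modality is re-weighted by a gate computed from
    the concatenated features, so unreliable responses in one modality can be
    compensated by the other."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor):
        g = self.gate(torch.cat([feat_a, feat_b], dim=1))  # shared gate in [0, 1]
        shared_a = feat_a * g + feat_b * (1.0 - g)          # complement A with B
        shared_b = feat_b * g + feat_a * (1.0 - g)          # complement B with A
        return shared_a, shared_b


def edge_supervision_loss(pred_edge: torch.Tensor, gt_saliency: torch.Tensor):
    """Hypothetical EAGN supervision: derive an edge map from the ground-truth
    saliency mask with Sobel filters and supervise the predicted edge map with
    binary cross-entropy."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=gt_saliency.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(gt_saliency, kx, padding=1)
    gy = F.conv2d(gt_saliency, ky, padding=1)
    gt_edge = (gx.abs() + gy.abs()).clamp(0, 1)
    return F.binary_cross_entropy_with_logits(pred_edge, gt_edge)


if __name__ == "__main__":
    # Toy usage with random tensors standing in for two modality features.
    cfsm = CrossModalFeatureSharing(channels=64)
    a, b = torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)
    sa, sb = cfsm(a, b)
    mask = (torch.rand(2, 1, 32, 32) > 0.5).float()
    loss = edge_supervision_loss(torch.randn(2, 1, 32, 32), mask)
    print(sa.shape, sb.shape, loss.item())
```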

    Paper Information

    Received: Nov. 1, 2022

    Accepted: --

    Published Online: Apr. 12, 2023

    The Author Email: Yuanfeng LIAN (lianyuanfeng@cup.edu.cn)

    DOI: 10.11805/tkyda2022216
