Spacecraft Recovery & Remote Sensing, Vol. 45, Issue 3, 82 (2024)
Text-Semantics-Driven Feature Extraction from Remote Sensing Imagery
With the rapid development of remote sensing technology, high-precision feature extraction from remote sensing imagery has become increasingly important in fields such as geographic information science, urban planning, and environmental monitoring. However, traditional image-only feature extraction methods often achieve limited accuracy on complex and variable surface features, making it difficult to meet diverse application needs. To address this issue, this paper proposes a multimodal remote sensing image semantic segmentation framework (MMRSSEG) that integrates visual and textual information through deep learning to achieve high-precision analysis of remote sensing images. Experiments on a remote sensing image dataset of buildings show that MMRSSEG significantly improves the accuracy of pixel-level feature extraction compared with traditional image segmentation methods, and in the building recognition task it outperforms traditional unimodal algorithms. These results demonstrate the effectiveness and promise of incorporating multimodal textual information into remote sensing image segmentation.
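The abstract does not specify MMRSSEG's architecture; as an illustration of the general idea of fusing an image encoder with a text encoder for pixel-level prediction, the following is a minimal sketch in PyTorch. Every class name, layer size, and fusion choice (broadcasting a pooled text vector over the spatial feature map and concatenating channel-wise) is a hypothetical placeholder, not the authors' method.

```python
# Illustrative sketch of text-guided semantic segmentation (not MMRSSEG itself).
# All names and dimensions below are hypothetical placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyTextGuidedSegmenter(nn.Module):
    def __init__(self, vocab_size=1000, text_dim=64, img_dim=64, num_classes=2):
        super().__init__()
        # Image branch: a tiny convolutional encoder with stride-2 downsampling.
        self.img_encoder = nn.Sequential(
            nn.Conv2d(3, img_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(img_dim, img_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Text branch: token embeddings mean-pooled into one sentence vector.
        self.text_encoder = nn.Embedding(vocab_size, text_dim)
        # Fusion + decoder: concatenate the broadcast text vector with the
        # image features, then predict per-pixel class logits.
        self.decoder = nn.Sequential(
            nn.Conv2d(img_dim + text_dim, img_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(img_dim, num_classes, 1),
        )

    def forward(self, image, token_ids):
        feats = self.img_encoder(image)                  # (B, C, H/4, W/4)
        text = self.text_encoder(token_ids).mean(dim=1)  # (B, text_dim)
        # Broadcast the text vector over the spatial grid and fuse.
        text = text[:, :, None, None].expand(-1, -1, feats.shape[2], feats.shape[3])
        fused = torch.cat([feats, text], dim=1)
        logits = self.decoder(fused)
        # Upsample logits back to the input resolution for pixel-level output.
        return F.interpolate(logits, size=image.shape[2:],
                             mode="bilinear", align_corners=False)


if __name__ == "__main__":
    model = ToyTextGuidedSegmenter()
    img = torch.randn(2, 3, 128, 128)      # batch of RGB patches
    txt = torch.randint(0, 1000, (2, 8))   # batch of tokenized text prompts
    print(model(img, txt).shape)           # torch.Size([2, 2, 128, 128])
```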
Citation: Sijun DONG, Xiaoliang MENG. Text-Semantics-Driven Feature Extraction from Remote Sensing Imagery[J]. Spacecraft Recovery & Remote Sensing, 2024, 45(3): 82
Received: Nov. 1, 2023
Published Online: Oct. 30, 2024
Author Email: MENG Xiaoliang (xmeng@whu.edu.cn)