Acta Optica Sinica, Volume. 43, Issue 12, 1212001(2023)

Lightweight Ship Detection Based on Optical Remote Sensing Images for Embedded Platform

Huiying Wang1, Chunping Wang1, Qiang Fu1,*, Zishuo Han2, and Dongdong Zhang1
Author Affiliations
  • 1Department of Electronic and Optical Engineering, Shijiazhuang Campus of Army Engineering University, Shijiazhuang 050003, Hebei, China
  • 232356 Troops of the Chinese People's Liberation Army, Xining 710003, Qinghai, China

    Objective

    Ship detection plays an important role in military and civilian fields such as defense security, dynamic port monitoring, and maritime traffic management. With the rapid development of space remote sensing technologies, the number of high-resolution optical remote sensing images is growing exponentially, which lays the data foundation for research on ship detection techniques. Meanwhile, detection systems are required to be both accurate and real-time to keep pace with this growth. Traditional object detection methods mainly rely on handcrafted mathematical models or object saliency. However, most of these algorithms depend on expert prior knowledge and have inherent limitations: they cannot cope with complex, variable backgrounds or with multi-modal, heterogeneous objects. Recent years have seen the rapid development of deep learning, and object detection methods based on convolutional neural networks (CNNs) are widely used because of their strong learning ability and high detection accuracy. Mainstream deep-learning-based object detection models fall into two categories: two-stage networks and single-stage networks. In general, two-stage networks achieve high accuracy but are difficult to deploy on embedded devices because of their large computational load and long runtimes. The YOLO series of single-stage detection algorithms has received extensive attention and application owing to its simple network structure and its balance of detection accuracy and speed. Even so, the limited computing power and memory of embedded devices make it difficult to apply single-stage detection models directly to real-time object detection on such hardware. Hence, we aim to deploy a high-performance model for detecting ships in optical remote sensing images on terminals with limited resources and space, and to build a lightweight ship detection network for complex remote sensing scenes that facilitates practical deployment of the model.

    Methods

    Because existing lightweight deep-learning-based object detection algorithms suffer from low accuracy and slow speed when detecting ships in complex remote sensing scenes, a lightweight real-time ship detection algorithm, STYOLO, is proposed for embedded platforms. The algorithm uses YOLOv5s as the basic framework. First, considering the high memory access cost of the backbone network, the efficient ShuffleNet v2 architecture is adopted as the backbone to extract image features, which reduces memory access cost and improves network parallelism. Second, the Slim-neck feature fusion structure is used as the feature enhancement network to fuse detailed information from lower-level feature maps and thus strengthen the feature response to small objects. In addition, the coordinate attention mechanism is applied in the multi-scale information fusion region to sharpen attention on objects, improving the detection of difficult samples and the resistance to background interference. Finally, a learning strategy combining cross-domain and in-domain transfer is proposed to reduce the difference between the source and target domains and improve the transfer learning effect.
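
    As background for the backbone choice, ShuffleNet v2 units keep memory access cost low by splitting channels into two branches and mixing them with a cheap channel shuffle. Below is a minimal PyTorch sketch of that shuffle operation; it is illustrative, not code from the paper:

    ```python
    import torch

    def channel_shuffle(x: torch.Tensor, groups: int = 2) -> torch.Tensor:
        """Interleave channel groups so information can flow between the
        branches of a ShuffleNet v2 unit. Pure reshape/transpose, so the
        memory access cost is negligible."""
        n, c, h, w = x.shape
        x = x.view(n, groups, c // groups, h, w)
        x = x.transpose(1, 2).contiguous()
        return x.view(n, c, h, w)
    ```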
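
    Slim-neck feature fusion is built around GSConv-style blocks, in which a dense convolution produces half of the output channels and a cheap depthwise convolution produces the other half. The sketch below assumes SiLU activations and a 5×5 depthwise kernel; both are assumptions rather than details reported here:

    ```python
    import torch
    import torch.nn as nn

    class GSConv(nn.Module):
        """Sketch of a GSConv block as used in Slim-neck designs: a dense
        conv yields half the output channels, a depthwise conv yields the
        rest, and a channel shuffle mixes the two halves."""
        def __init__(self, c_in: int, c_out: int, k: int = 1, s: int = 1):
            super().__init__()
            c_half = c_out // 2
            self.dense = nn.Sequential(
                nn.Conv2d(c_in, c_half, k, s, k // 2, bias=False),
                nn.BatchNorm2d(c_half),
                nn.SiLU(),
            )
            self.cheap = nn.Sequential(  # depthwise: one filter per channel
                nn.Conv2d(c_half, c_half, 5, 1, 2, groups=c_half, bias=False),
                nn.BatchNorm2d(c_half),
                nn.SiLU(),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            y1 = self.dense(x)
            y2 = self.cheap(y1)
            y = torch.cat([y1, y2], dim=1)
            # Shuffle the two halves together (groups = 2).
            n, c, h, w = y.shape
            return y.view(n, 2, c // 2, h, w).transpose(1, 2).reshape(n, c, h, w)
    ```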
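
    Coordinate attention differs from plain channel attention in that it pools along the height and width directions separately, so the resulting weights preserve positional information. A minimal sketch follows, using the reduction ratio of 32 reported in the Results section; the activation and hidden-width choices are assumptions:

    ```python
    import torch
    import torch.nn as nn

    class CoordinateAttention(nn.Module):
        """Minimal coordinate attention: pool along H and W separately,
        encode with a shared 1x1 conv, then re-weight the feature map."""
        def __init__(self, channels: int, reduction: int = 32):
            super().__init__()
            mid = max(8, channels // reduction)
            self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
            self.bn = nn.BatchNorm2d(mid)
            self.act = nn.Hardswish()
            self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
            self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            n, c, h, w = x.shape
            # Direction-aware pooling: (N, C, H, 1) and (N, C, W, 1).
            x_h = x.mean(dim=3, keepdim=True)                  # pool over W
            x_w = x.mean(dim=2, keepdim=True).transpose(2, 3)  # pool over H
            y = torch.cat([x_h, x_w], dim=2)                   # (N, C, H+W, 1)
            y = self.act(self.bn(self.conv1(y)))
            y_h, y_w = torch.split(y, [h, w], dim=2)
            a_h = torch.sigmoid(self.conv_h(y_h))                  # (N, C, H, 1)
            a_w = torch.sigmoid(self.conv_w(y_w.transpose(2, 3)))  # (N, C, 1, W)
            return x * a_h * a_w
    ```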
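
    The combined transfer strategy can be read as two sequential fine-tuning stages: cross-domain transfer first adapts natural-image pretrained weights to remote sensing imagery, and in-domain transfer then adapts them to the target ship dataset. A schematic sketch under those assumptions follows; the checkpoint names and the `fit` helper are hypothetical:

    ```python
    import torch

    def load_matching_weights(model, ckpt_path: str):
        """Copy every pretrained tensor whose name and shape match the
        model; layers that differ (e.g., class-specific detection heads)
        keep their fresh initialization."""
        state = torch.load(ckpt_path, map_location="cpu")
        own = model.state_dict()
        own.update({k: v for k, v in state.items()
                    if k in own and v.shape == own[k].shape})
        model.load_state_dict(own)
        return model

    # Stage 1, cross-domain transfer: start from generic natural-image
    # weights and fine-tune on a large auxiliary remote sensing ship set.
    #   model = load_matching_weights(model, "natural_image_pretrain.pt")
    #   fit(model, aux_remote_sensing_set)    # hypothetical helper
    # Stage 2, in-domain transfer: continue from the stage-1 weights and
    # fine-tune on the target remote sensing ship dataset.
    #   model = load_matching_weights(model, "stage1_best.pt")
    #   fit(model, target_ship_set)
    ```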

    Results and Discussions

    After 100 training iterations, the proposed algorithm is compared with ShuffleNetv2-YOLOv5s, YOLOv5s, MobileNetv3-YOLOv5s, and YOLOv5n on the same validation and test sets; it performs well on all evaluated metrics (Fig. 11), which verifies its effectiveness. On the YOLOv5s framework, ShuffleNet v2 is used as the backbone and Slim-neck as the feature enhancement network, and the detection models are trained by cross-domain transfer learning. Compared with the YOLOv5s model, the lightweight model reduces detection accuracy by 2.12 percentage points while cutting floating-point operations and parameters by 62.02% and 62.05%, respectively (Table 2). To improve the detection of difficult samples and the resistance to background interference, the coordinate attention mechanism is employed at the intersections of different information scales in the feature enhancement network. Compared with the model without coordinate attention, the proposed algorithm improves mAP by 4.94 percentage points while increasing the number of parameters by only 0.75% (Table 3). When different attention mechanisms are applied at these intersections, the model with coordinate attention achieves the highest mAP of 90.46% at a reduction ratio of 32, an increase of 4.94 percentage points (Table 4). A learning strategy that combines cross-domain and in-domain transfer is proposed to reduce the discrepancy between the source and target domains and improve transfer learning. With this strategy, the proposed algorithm reaches an mAP of 94.33%, which is 3.87 and 14.17 percentage points higher than that obtained with cross-domain transfer learning alone and in-domain transfer learning alone, respectively (Table 5). Finally, the proposed algorithm is compared with ShuffleNetv2-YOLOv5s, YOLOv5s, MobileNetv3-YOLOv5s, and YOLOv5n on a desktop computer and on the Jetson Nano terminal; it achieves a good trade-off between detection speed and detection accuracy in the optical remote sensing ship detection task, with good overall performance (Table 6 and Fig. 12).
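
    For context, per-frame throughput figures such as those in Table 6 are commonly measured by timing repeated forward passes after a warm-up phase. The sketch below shows one typical way to do this on a CUDA device such as the Jetson Nano; the 640×640 input size and iteration counts are assumptions, and pre/post-processing is excluded:

    ```python
    import time
    import torch

    @torch.no_grad()
    def measure_fps(model, size: int = 640, runs: int = 200,
                    warmup: int = 20, device: str = "cuda") -> float:
        """Rough single-image inference throughput on a CUDA device."""
        model.eval().to(device)
        x = torch.randn(1, 3, size, size, device=device)
        for _ in range(warmup):      # warm-up stabilizes clocks and caches
            model(x)
        torch.cuda.synchronize()
        t0 = time.perf_counter()
        for _ in range(runs):
            model(x)
        torch.cuda.synchronize()     # wait for all queued GPU work
        return runs / (time.perf_counter() - t0)
    ```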

    Conclusions

    To address the inability of existing lightweight object detection algorithms to detect ships accurately and in real time in complex remote sensing scenes, we propose STYOLO, a lightweight real-time algorithm for detecting ships in optical remote sensing images on embedded platforms. Compared with current mainstream detection algorithms used on embedded systems, STYOLO effectively improves detection speed while maintaining high accuracy. On the Jetson Nano terminal, it reaches a detection speed of 102.8 frame/s, which is approximately 2.21 times faster than YOLOv5s, 1.36 times faster than ShuffleNetv2-YOLOv5s, 1.70 times faster than MobileNetv3-YOLOv5s, and 1.50 times faster than YOLOv5n. Its mAP reaches 94.33%, which is 2.7 percentage points higher than that of YOLOv5s, 4.19 percentage points higher than that of ShuffleNetv2-YOLOv5s, 7.27 percentage points higher than that of MobileNetv3-YOLOv5s, and 24.61 percentage points higher than that of YOLOv5n; this meets the requirements for accurate, real-time ship detection in optical remote sensing images on embedded platforms. In remote sensing ship detection tasks, visible-light images are susceptible to natural environmental conditions, which weakens target features and makes further accuracy gains difficult. Hence, improving the detection accuracy of weak objects by fusing infrared and visible images is a key direction for future research.

    Paper Information

    Category: Instrumentation, Measurement and Metrology

    Received: Sep. 7, 2022

    Accepted: Oct. 27, 2022

    Published Online: Apr. 25, 2023

    The Author Email: Fu Qiang (1245316750@qq.com)

    DOI: 10.3788/AOS221689
