Infrared and Laser Engineering, Volume 53, Issue 1, 20230472 (2024)

Image target detection algorithm based on YOLOv7-tiny in complex background

Shan Xue1,2, Hongyu An1, Qiongying Lv1, and Guohua Cao2
Author Affiliations
  • 1College of Mechanical and Electrical Engineering, Changchun University of Science and Technology, Changchun 130022, China
  • 2Chongqing Research Institute, Changchun University of Science and Technology, Chongqing 400000, China

    Objective  Once a "black-flying" (unauthorized) drone carries a payload such as explosives, it can threaten people on the ground. Detecting such drones against complex backgrounds like parks, amusement parks, and schools is the key task of anti-drone systems in public areas. This paper aims to detect small-scale targets in complex backgrounds. Traditional hand-crafted image feature extraction methods are not target-specific, have high time complexity, generate redundant candidate windows, and deliver poor detection performance with low average precision; false detections and missed detections occur when detecting small-scale UAVs in complex backgrounds. This paper therefore develops a deep-learning-based detection model for unauthorized UAVs, which is essential for anti-drone systems.

    Methods  YOLOv7 is a single-stage object detection algorithm with high detection accuracy and good inference speed. YOLOv7-tiny is its lightweight variant designed for edge devices, with fewer parameters and faster operation, and is therefore widely used in industry. In the backbone network, the proposed multi-scale channel attention module SMSE (Fig.5) is introduced to strengthen attention on UAV targets in complex backgrounds. Between the backbone network and the feature fusion layer, the RFB feature extraction module (Fig.6) is introduced to enlarge the receptive field and extract richer feature information. In the feature fusion stage, a small-target detection layer is added to improve the detection of small UAV targets. For loss computation, the SIoU loss function redefines the penalty terms, which significantly improves training speed and inference accuracy. Finally, ordinary convolution is replaced with deformable convolution (Fig.7), so that detection fits the shape and size of the object more closely.
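The SIoU loss adopted above augments the IoU term with angle, distance, and shape penalties. Below is a minimal pure-Python sketch of such a loss for axis-aligned boxes in (cx, cy, w, h) format, following the published SIoU formulation rather than the authors' code; all names and the `theta` default are illustrative.

```python
import math

def siou_loss(pred, gt, theta=4.0, eps=1e-9):
    """SIoU-style loss for two boxes given as (cx, cy, w, h) tuples."""
    px, py, pw, ph = pred
    gx, gy, gw, gh = gt

    # Intersection over union of the two boxes
    ix = max(0.0, min(px + pw / 2, gx + gw / 2) - max(px - pw / 2, gx - gw / 2))
    iy = max(0.0, min(py + ph / 2, gy + gh / 2) - max(py - ph / 2, gy - gh / 2))
    inter = ix * iy
    union = pw * ph + gw * gh - inter + eps
    iou = inter / union

    # Smallest enclosing box, used to normalize the distance cost
    cw = max(px + pw / 2, gx + gw / 2) - min(px - pw / 2, gx - gw / 2) + eps
    ch = max(py + ph / 2, gy + gh / 2) - min(py - ph / 2, gy - gh / 2) + eps

    # Angle cost: rewards alignment of the center line with an axis
    dx, dy = gx - px, gy - py
    sigma = math.hypot(dx, dy) + eps
    sin_alpha = min(abs(dy) / sigma, 1.0)
    angle = 1 - 2 * math.sin(math.asin(sin_alpha) - math.pi / 4) ** 2

    # Distance cost, modulated by the angle cost through gamma
    gamma = 2 - angle
    delta = (1 - math.exp(-gamma * (dx / cw) ** 2)) \
          + (1 - math.exp(-gamma * (dy / ch) ** 2))

    # Shape cost: penalizes width/height mismatch between the boxes
    ww = abs(pw - gw) / max(pw, gw)
    wh = abs(ph - gh) / max(ph, gh)
    omega = (1 - math.exp(-ww)) ** theta + (1 - math.exp(-wh)) ** theta

    return 1 - iou + (delta + omega) / 2
```

Identical boxes give a loss near zero, and the loss grows as the centers separate or the shapes diverge, which is what makes the redefined penalty terms better behaved during training than a plain IoU loss.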
    Results and Discussions  The dataset combines a self-made dataset (Fig.1) with the Dalian University of Technology drone dataset (Fig.2). The primary evaluation metrics are mAP (mean average precision) and FPS (detection speed), with Params (parameter count) and GFLOPS (computational cost) as secondary metrics. Each module was compared against the original algorithm: the attention comparison experiment (Tab.1), RFB module comparison experiment (Tab.2), small-target detection layer comparison experiment (Tab.3), loss function comparison experiment (Tab.4), and deformable convolution comparison experiment (Tab.5). Ablation experiments (Tab.6) confirmed the effectiveness and feasibility of the proposed algorithm through mAP comparison, improving accuracy by 6.1%. On this basis, the detection performance of different algorithms was compared (Tab.7), and the generalization of the algorithm was verified on the VOC public dataset (Tab.8).

    Conclusions  This article proposes an improved object detection algorithm for anti-drone systems. The multi-scale channel attention module strengthens attention on small targets; the fused RFB module enlarges the receptive field; the added small-target detection layer improves detection ability; and the improved loss function increases training speed and inference accuracy. Finally, deformable convolution is introduced to better fit the target size. The improved algorithm achieves good detection results on different datasets.
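The mAP metric used above averages the per-class average precision (AP). A minimal sketch of all-point interpolated AP for one class, assuming detections are given as (confidence, is_true_positive) pairs, is shown below; it illustrates the metric itself, not the authors' evaluation code.

```python
def average_precision(detections, num_gt):
    """All-point interpolated AP for one class.

    detections: list of (confidence, is_true_positive) pairs.
    num_gt: number of ground-truth boxes for this class.
    """
    # Rank detections by confidence, highest first
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    tp = fp = 0
    points = []  # (recall, precision) after each ranked detection
    for _, is_tp in detections:
        tp += is_tp
        fp += not is_tp
        points.append((tp / num_gt, tp / (tp + fp)))

    # Integrate precision over recall, taking the maximum precision
    # at any recall >= r (all-point interpolation)
    ap, prev_recall = 0.0, 0.0
    for i, (recall, _) in enumerate(points):
        max_prec = max(p for _, p in points[i:])
        ap += (recall - prev_recall) * max_prec
        prev_recall = recall
    return ap
```

A detector that ranks all true positives above every false positive reaches AP = 1.0 for that class; mAP is then the mean of these per-class values.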


    Shan Xue, Hongyu An, Qiongying Lv, Guohua Cao. Image target detection algorithm based on YOLOv7-tiny in complex background[J]. Infrared and Laser Engineering, 2024, 53(1): 20230472

    Paper Information

    Received: Jul. 30, 2023

    Published Online: Mar. 19, 2024

    DOI:10.3788/IRLA20230472
