Optics and Precision Engineering, Volume 33, Issue 5, 789 (2025)

End-to-end recognition of nighttime wildlife based on semi-supervised learning

Han LU1, Bolun CUI2, Huayang WAN1, Guofeng ZHANG1, Chen SHEN1, and Chi WANG1,*
Author Affiliations
  • 1School of Mechatronic Engineering and Automation, Shanghai University, Shanghai 200444, China
  • 2Beijing Institute of Space Mechanics & Electricity, Beijing 100094, China
    Figures & Tables (13)
    • Structure of feature fusion attention module
    • Structure of the detector AN-YOLO
    • Semi-supervised learning model SAN-YOLO
    • Position of attention module in YOLO backbone
    • Comparison of detection accuracy of different models
    • Detection accuracy of each species with 5% annotation
    • Role of semi-supervised training framework in model training
    • Analysis of difficult samples in SAN-YOLO test
    • Table 1. Distribution of labeled data in datasets

      | No. | Label name | Source 1 | Source 2 | Total |
      | 1 | Armadillo | 3 305 | 0 | 3 305 |
      | 2 | Bird | 3 215 | 0 | 3 215 |
      | 3 | Bobcat | 0 | 3 104 | 3 104 |
      | 4 | Cat | 0 | 2 635 | 2 635 |
      | 5 | Coyote | 0 | 3 357 | 3 357 |
      | 6 | Dasyprocta_punctata | 3 628 | 0 | 3 628 |
      | 7 | Deer | 509 | 3 503 | 4 012 |
      | 8 | Dog | 0 | 712 | 712 |
      | 9 | Fox | 63 | 852 | 915 |
      | 10 | Giant_anteater | 485 | 0 | 485 |
      | 11 | Leopardus | 314 | 0 | 314 |
      | 12 | Mustelidae | 59 | 1 058 | 1 117 |
      | 13 | Opossum | 5 049 | 6 757 | 11 806 |
      | 14 | Paca | 3 833 | 0 | 3 833 |
      | 15 | Peccary | 7 249 | 0 | 7 249 |
      | 16 | Rabbit | 0 | 3 266 | 3 266 |
      | 17 | Raccoon | 1 038 | 3 802 | 4 840 |
      | 18 | Squirrel | 53 | 129 | 182 |
      | 19 | Tamandua | 1 120 | 0 | 1 120 |
      | 20 | Tapirus | 1 383 | 0 | 1 383 |
      | Sum | | 31 303 | 29 175 | 60 478 |
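As a sanity check on Table 1, each row's two source counts should add up to the row total, and the column sums should reproduce the overall 31 303 + 29 175 = 60 478 labeled images. A minimal sketch, with the per-class counts transcribed from the table:

```python
# Per-class labeled-image counts transcribed from Table 1 as
# (source 1, source 2); row and column totals are derived, not hard-coded.
counts = {
    "Armadillo": (3305, 0), "Bird": (3215, 0), "Bobcat": (0, 3104),
    "Cat": (0, 2635), "Coyote": (0, 3357), "Dasyprocta_punctata": (3628, 0),
    "Deer": (509, 3503), "Dog": (0, 712), "Fox": (63, 852),
    "Giant_anteater": (485, 0), "Leopardus": (314, 0), "Mustelidae": (59, 1058),
    "Opossum": (5049, 6757), "Paca": (3833, 0), "Peccary": (7249, 0),
    "Rabbit": (0, 3266), "Raccoon": (1038, 3802), "Squirrel": (53, 129),
    "Tamandua": (1120, 0), "Tapirus": (1383, 0),
}

source1 = sum(s1 for s1, _ in counts.values())  # column sum for source 1
source2 = sum(s2 for _, s2 in counts.values())  # column sum for source 2
total = source1 + source2
print(source1, source2, total)  # 31303 29175 60478
```

The derived totals match the Sum row of the table, so the transcription is internally consistent.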
    • Table 2. Detection performance for application of different enhancement modules to YOLOv8

      | Method | Additional modules | Extra time/ms | mAP50 | mAP50:95 |
      | YOLOv8-n | None | 0 | 0.571 | 0.352 |
      | Two-stage enhancement | Zero-DCE | 160.58 | 0.566 | 0.351 |
      | | Zero-DCE++ | 62.73 | 0.571 | 0.356 |
      | | RetinexNet | 634.61 | 0.595 | 0.414 |
      | End-to-end enhancement | CPA-Enhancer | 262.61 | 0.561 | 0.365 |
      | | PENet | 166.06 | 0.560 | 0.345 |
      | Attention mechanism | Channel attention (backbone) | 5.77 | 0.583 | 0.367 |
      | | Channel attention (neck) | 5.46 | 0.578 | 0.357 |
      | | Spatial attention (backbone) | 4.06 | 0.591 | 0.370 |
      | | Spatial attention (neck) | 4.10 | 0.586 | 0.361 |
      | | CBAM (backbone) | 8.28 | 0.572 | 0.361 |
      | | CBAM (neck) | 8.68 | 0.582 | 0.362 |
      | | FFA (backbone) | 31.72 | 0.574 | 0.360 |
      | | FFA (neck) | 32.16 | 0.596 | 0.379 |
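A useful way to read Table 2 is accuracy gain versus added latency: FFA in the neck buys the largest mAP50 gain over the plain YOLOv8-n baseline at a fraction of the overhead of two-stage enhancers such as RetinexNet. A minimal sketch comparing a few representative rows (values transcribed from the table):

```python
baseline_map50 = 0.571  # YOLOv8-n with no extra module (Table 2)

# (extra inference time in ms, mAP50) for representative modules from Table 2
modules = {
    "RetinexNet": (634.61, 0.595),
    "Spatial attention (backbone)": (4.06, 0.591),
    "FFA (neck)": (32.16, 0.596),
}

for name, (extra_ms, map50) in modules.items():
    gain = map50 - baseline_map50
    print(f"{name}: +{gain:.3f} mAP50 for {extra_ms} ms of extra time")
```

FFA (neck) yields +0.025 mAP50 for 32.16 ms, while RetinexNet needs roughly twenty times the latency for a slightly smaller gain.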
    • Table 3. Performance comparison of semi-supervised and fully supervised learning

      | Label ratio | Burn-in mAP50 | Burn-in mAP50:95 | Supervised mAP50 | Supervised mAP50:95 | Semi-supervised mAP50 | Semi-supervised mAP50:95 |
      | 5% | 0.484 | 0.278 | 0.596 | 0.379 | 0.697 | 0.509 |
      | 10% | 0.673 | 0.444 | 0.749 | 0.520 | 0.798 | 0.604 |
    • Table 4. Ablation of attention module and semi-supervised training

      | Attention | Semi-supervision | mAP50 | mAP50:95 |
      | × | × | 0.571 | 0.352 |
      | ✓ | × | 0.596 (↑4.38%) | 0.379 (↑7.67%) |
      | × | ✓ | 0.687 (↑20.32%) | 0.490 (↑39.20%) |
      | ✓ | ✓ | 0.697 (↑22.07%) | 0.509 (↑44.60%) |
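The percentage gains in Table 4 are relative improvements over the plain YOLOv8-n baseline (mAP50 0.571, mAP50:95 0.352). They can be reproduced directly from the reported accuracies:

```python
def rel_gain(value, baseline):
    """Relative improvement over the baseline, in percent."""
    return 100.0 * (value - baseline) / baseline

base_map50, base_map5095 = 0.571, 0.352  # first row of Table 4 (no modules)

# (mAP50, mAP50:95) for: attention only, semi-supervision only, both
for map50, map5095 in [(0.596, 0.379), (0.687, 0.490), (0.697, 0.509)]:
    print(f"mAP50 ↑{rel_gain(map50, base_map50):.2f}%  "
          f"mAP50:95 ↑{rel_gain(map5095, base_map5095):.2f}%")
```

This prints 4.38%/7.67%, 20.32%/39.20%, and 22.07%/44.60%, matching the table, and shows that the two components are largely complementary, with semi-supervision contributing most of the gain.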
    • Table 5. Performance comparison of semi-supervised and fully supervised learning methods

      | Method | FLOPs/G | mAP50 | mAP50:95 |
      | YOLOv3-tiny | 18.9 | 0.553 | 0.344 |
      | YOLOv5-n | 7.2 | 0.554 | 0.368 |
      | YOLOv8-s | 28.7 | 0.651 | 0.485 |
      | SSD | 6.8 | 0.536 | 0.311 |
      | RT-DETR-r18 | 57.3 | 0.610 | 0.464 |
      | Soft-teacher | 202.3 | 0.674 | 0.466 |
      | Efficient-teacher | 5.72 | 0.662 | 0.450 |
      | Ours | 5.90 | 0.697 | 0.509 |
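Table 5 can also be read as accuracy per unit of compute: the proposed model reaches the highest mAP50 while using roughly 1/34 of Soft-teacher's FLOPs. A crude accuracy-per-GFLOP ratio (values transcribed from the table) makes the gap explicit:

```python
# (FLOPs in G, mAP50) for selected methods from Table 5
methods = {
    "YOLOv8-s": (28.7, 0.651),
    "Soft-teacher": (202.3, 0.674),
    "Efficient-teacher": (5.72, 0.662),
    "Ours": (5.90, 0.697),
}

# Rank methods by mAP50 per GFLOP (a rough efficiency proxy,
# ignoring memory traffic and actual latency).
ranked = sorted(methods.items(), key=lambda kv: kv[1][1] / kv[1][0], reverse=True)
for name, (flops, map50) in ranked:
    print(f"{name}: {map50 / flops:.4f} mAP50 per GFLOP")
```

Under this proxy the proposed model ranks first, slightly ahead of Efficient-teacher and far ahead of the heavier Soft-teacher and YOLOv8-s.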
    Get Citation

    Han LU, Bolun CUI, Huayang WAN, Guofeng ZHANG, Chen SHEN, Chi WANG. End-to-end recognition of nighttime wildlife based on semi-supervised learning[J]. Optics and Precision Engineering, 2025, 33(5): 789

    Paper Information

    Received: Dec. 24, 2024

    Published Online: May 20, 2025

    DOI: 10.37188/OPE.20253305.0789