Journal of Applied Optics, Vol. 45, Issue 3, 616 (2024)

Low-light pedestrian detection and tracking algorithm based on autoencoder structure and improved Bytetrack

Zelin REN, Lan PANG, Chao WANG, Jiaheng LI, and Fangyan ZHOU*
Author Affiliations
  • Xi'an Institute of Applied Optics, Xi'an 710065, China
    Figures & Tables (18)
    Flow chart of overall research framework
    Model framework of multi-task autoencoding transformations (MAET)
    Flow chart of low-illumination degradation
    Structure diagram of target detection decoder with ASFF
    Transformer-based pedestrian re-identification network
    Overall framework diagram of tracking network
    Comparison of detection results of different algorithms
    Visualization results of attention map
    Comparison of multi-target tracking effects
    Multi-target tracking effects (a)
    Multi-target tracking effects (b)
    Multi-target tracking effects (c)
    • Table 1. Multi-target tracking evaluation indicators

      Indicator   Definition
      HOTA        Higher-order tracking accuracy
      MOTA        Multi-object tracking accuracy
      IDF1        Ratio of correctly identified detections to the average of ground-truth and computed detections
      IDs         Total number of identity switches
      FP          Total number of false-positive (spurious) detections
      FN          Total number of false-negative (missed) detections
      FPS         Frame rate
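      The CLEAR-style indicators in Table 1 follow their standard definitions; as a minimal illustration of how MOTA and IDF1 are computed from the raw counts (generic formulas with made-up counts, not code or data from the paper):

      ```python
      def mota(fn: int, fp: int, id_switches: int, num_gt: int) -> float:
          """Multi-object tracking accuracy: 1 - (FN + FP + IDs) / total GT objects."""
          return 1.0 - (fn + fp + id_switches) / num_gt

      def idf1(idtp: int, idfp: int, idfn: int) -> float:
          """Identity F1: 2*IDTP / (2*IDTP + IDFP + IDFN)."""
          return 2 * idtp / (2 * idtp + idfp + idfn)

      # Hypothetical counts, for illustration only
      print(mota(fn=100, fp=50, id_switches=10, num_gt=1000))  # -> 0.84
      print(idf1(idtp=80, idfp=20, idfn=20))                   # -> 0.8
      ```

      Note that MOTA can go negative when errors outnumber ground-truth objects, while IDF1 rewards keeping the same identity on the same person over the whole sequence.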
    • Table 2. Comparative experimental results of target detection algorithms

      Algorithm     Training set  Test set  Image preprocessing  mAP@0.5:0.95  mAP@0.5
      YOLOX         normal        low       —                    0.196         0.371
      YOLOX         normal        low       Zero-DCE             0.165         0.325
      YOLOX         normal        low       KinD                 0.160         0.319
      YOLOX         normal        low       URetinex-Net         0.201         0.380
      YOLOX         low           low       —                    0.299         0.635
      YOLOX-ASFF    low           low       —                    0.325         0.664
      Faster R-CNN  low           low       —                    0.282         0.602
      CenterNet     low           low       —                    0.284         0.599
      RTMDet        low           low       —                    0.384         0.701
      Proposed      normal+low    low       —                    0.362         0.682
    • Table 3. Generalization experimental results of target detection algorithms

      Algorithm         Training set  Image preprocessing  mAP@0.5  mAP@0.75
      Faster R-CNN      low           —                    0.892    0.690
                        normal        Zero-DCE             0.896    0.683
                        normal        KinD                 0.860    0.683
                        normal        URetinex-Net         0.881    0.694
      CenterNet         low           —                    0.882    0.751
                        normal        Zero-DCE             0.879    0.744
                        normal        KinD                 0.864    0.732
                        normal        URetinex-Net         0.864    0.760
      RTMDet            low           —                    0.899    0.780
                        normal        Zero-DCE             0.895    0.760
                        normal        KinD                 0.806    0.681
                        normal        URetinex-Net         0.811    0.686
      YOLOX-ASFF        low           —                    0.945    0.760
                        normal        Zero-DCE             0.934    0.731
                        normal        KinD                 0.885    0.720
                        normal        URetinex-Net         0.905    0.746
      Proposed          normal+low    —                    0.949    0.795
      YOLOX (baseline)  normal        —                    0.929    0.763
    • Table 4. Ablation experimental results of appearance feature extraction network

      Appearance feature network  HOTA/%  MOTA/%  IDF1/%
      OSNet                       70.65   88.12   85.31
      TransReID                   72.36   89.55   88.34
    • Table 5. Ablation experimental results of improved components

      AW  NSA  HOTA/%  MOTA/%  IDF1/%
      –   –    70.79   87.81   86.27
      ✓   –    71.28   89.38   88.30
      ✓   ✓    72.36   89.55   88.34
    • Table 6. Comparison of multi-target tracking results

      Algorithm                HOTA/%  MOTA/%  IDF1/%  IDs  FP   FN    FPS
      YOLOX-Bytetrack          66.19   78.83   82.15   305  86   1412  33.93
      MAET_YOLOX-Bytetrack     69.13   84.36   84.84   254  77   994   29.32
      YOLOX-Bytetrack-AW       67.14   83.02   82.99   183  46   1249  16.37
      MAET_YOLOX-Bytetrack-AW  72.36   89.55   88.34   153  2    665   19.98
    Citation

    Zelin REN, Lan PANG, Chao WANG, Jiaheng LI, Fangyan ZHOU. Low-light pedestrian detection and tracking algorithm based on autoencoder structure and improved Bytetrack[J]. Journal of Applied Optics, 2024, 45(3): 616

    Paper Information

    Category: Research Articles

    Received: Oct. 27, 2023

    Accepted: --

    Published Online: Jun. 2, 2024

    Corresponding author: Fangyan ZHOU (周方琰, b. 1998)

    DOI: 10.5768/JAO202445.0302001
