Opto-Electronic Engineering, Vol. 51, Issue 3, 230317-1 (2024)

Image-guided and point cloud space-constrained method for detection and localization of abandoned objects on the road

Huaiyu Cai1,2, Zhaoqian Yang1,2, Ziyang Cui1,2, Yi Wang1,2, and Xiaodong Chen1,2,*
Author Affiliations
  • 1School of Precision Instrument and Opto-electronics Engineering, Tianjin University, Tianjin 300072, China
  • 2Key Laboratory of Optoelectronic Information Technology, Ministry of Education, Tianjin University, Tianjin 300072, China
    Figures & Tables (17)
    The overall framework of the proposed method
    The network architecture of YOLOv7-OD
    The network structure of SOD Layer
    SDK Attention module
    Generation of ROI areas in the LiDAR coordinate system based on image object detection bounding boxes
    Structural diagram of the experimental device
    Filtering results. (a) Original point cloud data; (b) Effective point cloud data obtained by field-of-view matching; (c) Non-ground point cloud after CSF filtering; (d) Ground point cloud after CSF filtering
    The specific meanings of various evaluation metrics in space
    Detection and localization results for abandoned objects on the road (Scene one, selected partial area). (a) Image of the scene to be tested; (b) Detection and localization results by method A; (c) Detection and localization results by our method
    Detection and localization results for abandoned objects on the road (Scene two, selected partial area). (a) Image of the scene to be tested; (b) Detection and localization results by method A; (c) Detection and localization results by our method
    Experimental results for detecting and locating abandoned objects on the road. (a) Scene one; (b) Scene two
    • Table 1. Ablation experiments on the WOD dataset
      YOLOv7  SOD Layer  SDK Attention | AP/% (Small / Medium / Large) | AP/% (Small / Medium / Large) | mAP0.5/% | mAP0.5:0.95/%
      ✓       –          –             | 10.00 / 35.80 / 70.40         | 23.20 / 47.20 / 75.60         | 57.15    | 32.30
      ✓       ✓          –             | 11.90 / 38.90 / 66.90         | 26.00 / 49.70 / 73.10         | 59.28    | 34.18
      ✓       –          ✓             | 11.20 / 37.60 / 66.80         | 24.30 / 48.70 / 73.00         | 58.12    | 33.21
      ✓       ✓          ✓             | 12.00 / 39.00 / 67.70         | 26.20 / 50.80 / 73.20         | 59.46    | 34.33
    • Table 2. Ablation experiments on the custom dataset
      YOLOv7  SOD Layer  SDK Attention | AP/% (Small / Medium / Large) | AP/% (Small / Medium / Large) | mAP0.5/% | mAP0.5:0.95/%
      ✓       –          –             | 52.00 / 74.00 / 85.30         | 61.50 / 78.80 / 89.00         | 94.20    | 64.80
      ✓       ✓          –             | 55.00 / 79.80 / 92.70         | 65.40 / 84.00 / 95.50         | 94.30    | 69.80
      ✓       –          ✓             | 54.10 / 81.50 / 93.40         | 63.20 / 85.30 / 95.10         | 93.80    | 70.00
      ✓       ✓          ✓             | 57.30 / 82.00 / 92.00         | 66.80 / 85.40 / 93.90         | 95.30    | 71.90
    • Table 3. Additional evaluation metrics for different network models
      YOLOv7  SOD Layer  SDK Attention | Params/MB | GFLOPs | FPS
      ✓       –          –             | 71.3      | 103.3  | 82.2
      ✓       ✓          –             | 51.4      | 108.2  | 73.2
      ✓       –          ✓             | 73.7      | 118.3  | 75.4
      ✓       ✓          ✓             | 52.0      | 123.3  | 65.8
    • Table 4. Comparative experiments of YOLOv7-OD with other object detection algorithms
      Model            Params/MB | AP/% (Small / Medium / Large) | AP/% (Small / Medium / Large) | mAP0.5/% | mAP0.5:0.95/%
      Faster-RCNN[2]   79.0      | 3.70 / 19.0 / 41.7            | 9.00 / 26.2 / 46.6            | 29.7     | 15.6
      RetinaNet[7]     61.7      | 3.00 / 29.8 / 64.7            | 10.6 / 40.4 / 71.4            | 43.9     | 24.7
      YOLOX[22]        104       | 7.40 / 31.3 / 68.1            | 13.9 / 39.2 / 72.8            | 48.1     | 28.0
      DETR[3]          79.0      | 4.83 / 26.4 / 62.2            | 11.3 / 35.7 / 68.7            | 45.4     | 24.0
      YOLOv3[5]        119       | 3.00 / 26.2 / 64.1            | 5.80 / 35.0 / 69.8            | 41.9     | 22.9
      YOLOv5[23]       88.5      | 6.60 / 31.4 / 64.1            | 13.3 / 45.5 / 71.1            | 48.6     | 27.2
      YOLOv6[6]        72.4      | 7.50 / 37.5 / 71.7            | 16.7 / 46.4 / 78.0            | 51.9     | 31.0
      YOLOv7[18]       71.3      | 10.0 / 35.8 / 70.4            | 23.2 / 47.2 / 75.6            | 57.2     | 32.3
      YOLOv8[24]       83.6      | 8.30 / 38.9 / 71.3            | 17.7 / 46.7 / 76.5            | 53.0     | 32.0
      YOLOv7-OD        52.1      | 12.0 / 39.0 / 67.7            | 26.2 / 50.8 / 73.2            | 59.5     | 34.3
    • Table 5. Experimental results of two point cloud localization methods
      Method    N (road objects) | N (abandoned objects) | N (predicted) | N (true) | N (false) | Precision/% | Recall/%
      Method A  920              | 270                   | 150           | 147      | 3         | 98.00       | 55.56
      Ours      920              | 270                   | 258           | 250      | 8         | 96.90       | 95.56
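The precision and recall percentages in Table 5 are consistent with treating N (true) as correct localizations among the N (predicted) detections, and comparing N (predicted) against the 270 ground-truth abandoned objects. This interpretation is inferred from the counts rather than stated on this page; a minimal sketch:

```python
# Hedged sketch: metric definitions inferred from the counts in Table 5,
# not stated explicitly on this page.

def precision(n_true: int, n_predicted: int) -> float:
    """Share of predicted abandoned objects that were correctly localized."""
    return 100.0 * n_true / n_predicted

def recall(n_predicted: int, n_abandoned: int) -> float:
    """Share of ground-truth abandoned objects that were detected."""
    return 100.0 * n_predicted / n_abandoned

# Method A: 150 predictions, 147 correct, 270 ground-truth abandoned objects
print(round(precision(147, 150), 2))  # 98.0
print(round(recall(150, 270), 2))     # 55.56

# Ours: 258 predictions, 250 correct
print(round(precision(250, 258), 2))  # 96.9
print(round(recall(258, 270), 2))     # 95.56
```

Under this reading, the proposed method trades about one point of precision for a 40-point gain in recall over method A.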
    • Table 6. Localization errors for abandoned objects at different distances using the point cloud generation method
      Distance/m | MAE: ΔD/m | ΔW/m   | Δθ/(°)
      0~20       | 0.132     | 0.0099 | 0.195
      20~30      | 0.156     | 0.0121 | 0.115
      30~40      | 0.188     | 0.0162 | 0.0819
      Over 40    | 0.223     | 0.0271 | 0.0541
      Total      | 0.181     | 0.0218 | 0.122
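The columns in Table 6 are mean absolute errors (MAE) of the predicted distance D, width W, and orientation θ against ground truth. A generic sketch of the metric (the sample values below are illustrative, not the paper's data):

```python
# Mean absolute error, as used for the ΔD, ΔW, and Δθ columns in Table 6.
# The sample distances below are hypothetical, for illustration only.

def mae(predicted: list[float], ground_truth: list[float]) -> float:
    """Mean of |prediction - ground truth| over all localized objects."""
    assert len(predicted) == len(ground_truth)
    return sum(abs(p - g) for p, g in zip(predicted, ground_truth)) / len(predicted)

# Hypothetical object distances (m) in the 0~20 m band
pred = [5.10, 12.35, 18.02]
gt   = [5.00, 12.50, 18.20]
print(round(mae(pred, gt), 3))  # 0.143
```

As the table shows, distance and width errors grow with range (sparser LiDAR returns), while the angular error shrinks, since a fixed lateral offset subtends a smaller angle at longer range.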
    Huaiyu Cai, Zhaoqian Yang, Ziyang Cui, Yi Wang, Xiaodong Chen. Image-guided and point cloud space-constrained method for detection and localization of abandoned objects on the road[J]. Opto-Electronic Engineering, 2024, 51(3): 230317-1
    Paper Information

    Category: Article

    Received: Dec. 27, 2023

    Accepted: Mar. 1, 2024

    Published Online: Jul. 8, 2024

    The Author Email: Chen Xiaodong (陈晓冬)

    DOI:10.12086/oee.2024.230317
