Chinese Journal of Ship Research, Volume 20, Issue 2, 140 (2025)

Operation standardization evaluation method based on improved YOLOv8n for ship equipment disassembly and assembly

Zhendong ZHANG1, Cong GUAN1, Zehui ZHANG2, Chao WU1, and Xuewen DING1
Author Affiliations
  • 1School of Naval Architecture, Ocean and Energy Power Engineering, Wuhan University of Technology, Wuhan 430063, China
  • 2School of Automation (School of Artificial Intelligence), Hangzhou Dianzi University, Hangzhou 310018, China
    References (23)

    [10] GIRSHICK R, DONAHUE J, DARRELL T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation[C]//Proceedings of 2014 IEEE Conference on Computer Vision and Pattern Recognition. Columbus: IEEE, 2014: 580−587. doi: 10.1109/CVPR.2014.81.

    [12] REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: unified, real-time object detection[C]//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas: IEEE, 2016: 779−788. doi: 10.1109/CVPR.2016.91.

    [13] REDMON J, FARHADI A. YOLOv3: an incremental improvement[R]. Washington: University of Washington, 2018.

    [14] REDMON J, FARHADI A. YOLO9000: better, faster, stronger[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu: IEEE, 2017: 6517−6525. doi: 10.1109/CVPR.2017.690.

    [15] GE Z, LIU S T, WANG F, et al. YOLOX: exceeding YOLO series in 2021[J/OL]. arXiv: 2107.08430. https://arxiv.org/abs/2107.08430.

    [16] LIU W, ANGUELOV D, ERHAN D, et al. SSD: single shot multibox detector[C]//Proceedings of the 14th European Conference on Computer Vision–ECCV 2016. Amsterdam: Springer, 2016: 21−37. doi: 10.1007/978-3-319-46448-0_2.

    [17] DUAN K W, BAI S, XIE L X, et al. CenterNet: keypoint triplets for object detection[C]//Proceedings of 2019 IEEE/CVF International Conference on Computer Vision. Seoul, Korea (South): IEEE, 2019: 6568−6577. doi: 10.1109/ICCV.2019.00667.

    [24] JOCHER G. Ultralytics YOLO[EB/OL]. (2023-01-10)[2024-04-26]. https://github.com/ultralytics/ultralytics.

    [25] ZHANG Q L, YANG Y B. SA-Net: shuffle attention for deep convolutional neural networks[C]//Proceedings of 2021 IEEE International Conference on Acoustics, Speech and Signal Processing. Toronto: IEEE, 2021: 2235−2239. doi: 10.1109/ICASSP39728.2021.9414568.

    [26] LIU S, QI L, QIN H F, et al. Path aggregation network for instance segmentation[C]//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 8759−8768. doi: 10.1109/CVPR.2018.00913.

    [27] JIANG Y Q, TAN Z Y, WANG J Y, et al. GiraffeDet: a heavy-neck paradigm for object detection[J/OL]. arXiv: 2202.04256. https://arxiv.org/abs/2202.04256.

    [29] LI X, WANG W H, WU L J, et al. Generalized focal loss: learning qualified and distributed bounding boxes for dense object detection[C]//Proceedings of the 34th International Conference on Neural Information Processing Systems. Vancouver: Curran Associates Inc., 2020. doi: 10.5555/3495724.3497487.

    [30] TONG Z J, CHEN Y H, XU Z W, et al. Wise-IoU: bounding box regression loss with dynamic focusing mechanism[J/OL]. arXiv: 2301.10051. https://arxiv.org/abs/2301.10051.

    [31] WOO S, PARK J, LEE J Y, et al. CBAM: convolutional block attention module[C]//Proceedings of the 15th European Conference on Computer Vision–ECCV 2018. Munich: Springer, 2018. doi: 10.1007/978-3-030-01234-2_1.

    [33] HOU Q B, ZHOU D Q, FENG J S. Coordinate attention for efficient mobile network design[C]//Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville: IEEE, 2021: 13708−13717. doi: 10.1109/CVPR46437.2021.01350.

    [34] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach: Curran Associates Inc., 2017. doi: 10.5555/3295222.3295349.

    [35] JOCHER G, CHAURASIA A, STOKEN A, et al. Ultralytics/yolov5: v7.0 - YOLOv5 SOTA realtime instance segmentation[EB/OL]. (2020-06-10)[2024-04-26]. https://doi.org/10.5281/zenodo.3908559.

    [36] WANG C Y, BOCHKOVSKIY A, LIAO H Y M. YOLOv7: trainable bag-of-freebies sets new state-of-the-art for real-time object detectors[C]//Proceedings of 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Vancouver: IEEE, 2023. doi: 10.1109/CVPR52729.2023.00721.

    Paper Information

    Category: Ship Intelligent O&M and Fault Diagnosis

    Received: Apr. 28, 2024

    Accepted: --

    Published Online: May 15, 2025

    DOI: 10.19693/j.issn.1673-3185.03902