Laser & Optoelectronics Progress, Vol. 57, Issue 20, 201012 (2020)

Semantic Mapping Based on YOLOv3 and Visual SLAM

Bin Zou1,2, Siyang Lin1,*, and Zhishuai Yin1,2
Author Affiliations
  • 1Hubei Key Laboratory of Advanced Technology for Automotive Components, Wuhan University of Technology, Wuhan, Hubei 430070, China
  • 2Hubei Collaborative Innovation Center for Automotive Components Technology, Wuhan, Hubei 430070, China

    Visual simultaneous localization and mapping (SLAM) systems that use cameras as input can retain the spatial geometric information of the environment in the point cloud during map construction. However, such systems do not fully utilize the semantic information of objects in the environment. To address this problem, a mainstream visual SLAM system and neural-network-based object detection algorithms, such as Faster R-CNN and YOLO, are investigated. Moreover, an effective point cloud segmentation method that adds supporting planes to improve the robustness of the segmentation results is considered. Finally, the YOLOv3 algorithm is combined with the ORB-SLAM system to detect objects in the environment and ensure that the constructed point cloud map carries semantic information. The experimental results demonstrate that the proposed method constructs a semantic map with rich geometric information that can be applied to the navigation of unmanned vehicles and mobile robots.
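    The core step the abstract describes, attaching a detector's 2D class labels to the 3D points of a SLAM map, can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: the function name `backproject_box`, the toy camera intrinsics, and the hard-coded "chair" detection are all illustrative assumptions. It shows only the geometric idea of back-projecting the pixels inside a YOLOv3-style bounding box through a depth map into camera-frame 3D points that inherit the detection's semantic label.

```python
import numpy as np

def backproject_box(depth, box, label, fx, fy, cx, cy):
    """Back-project pixels inside a 2D detection box into a labeled 3D point set.

    depth : H x W depth map in meters (e.g. from an RGB-D keyframe).
    box   : (u_min, v_min, u_max, v_max) detection rectangle in pixels.
    fx, fy, cx, cy : pinhole camera intrinsics.
    Returns an (N, 3) array of camera-frame points and the semantic label.
    """
    u0, v0, u1, v1 = box
    pts = []
    for v in range(v0, v1):
        for u in range(u0, u1):
            z = depth[v, u]
            if z <= 0:  # skip invalid depth readings
                continue
            # Standard pinhole back-projection from pixel (u, v) at depth z.
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            pts.append((x, y, z))
    return np.array(pts), label

# Toy example: a flat 1 m depth patch and one hypothetical "chair" detection.
depth = np.ones((8, 8), dtype=np.float32)
points, label = backproject_box(depth, (2, 2, 6, 6), "chair",
                                fx=4.0, fy=4.0, cx=4.0, cy=4.0)
print(points.shape, label)  # (16, 3) chair
```

    In a full pipeline along the lines of the paper, these labeled points would then be transformed into the world frame using the ORB-SLAM keyframe pose and fused into the global semantic point cloud map.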

    Citation: Bin Zou, Siyang Lin, Zhishuai Yin. Semantic Mapping Based on YOLOv3 and Visual SLAM[J]. Laser & Optoelectronics Progress, 2020, 57(20): 201012

    Paper Information

    Category: Image Processing

    Received: Dec. 24, 2019

    Accepted: Feb. 25, 2020

    Published Online: Oct. 13, 2020

    The Author Email: Lin Siyang (xyz5016@whut.edu.cn)

    DOI: 10.3788/LOP57.201012
