Chinese Journal of Lasers, Vol. 49, Issue 18, 1810003 (2022)

Identifying and Constructing Semantic Maps Based on Laser and Vision Fusions for Improving Localization Performance

Lin Jiang1,2, Qi Liu1,*, Bin Lei1,2, Jianpeng Zuo1, and Hui Zhao2,3
Author Affiliations
  • 1Key Laboratory of Metallurgical Equipment and Control Technology, Ministry of Education, Wuhan University of Science and Technology, Wuhan 430081, Hubei, China
  • 2Institute of Robotics and Intelligent Systems, Wuhan University of Science and Technology, Wuhan 430081, Hubei, China
  • 3Hubei Key Laboratory of Mechanical Transmission and Manufacturing Engineering, Wuhan University of Science and Technology, Wuhan 430081, Hubei, China
    Figures & Tables (29)
    Overall framework of our algorithm
    Principle of laser point straight line fitting
    Straight line fitting. (a) Laser hit point; (b) straight line fitting
    Recognition of convex and concave corners. (a) Schematic of distinguishing convex and concave corners; (b) recognition result of convex and concave corners
    Semantic segmentation results. (a) Robot perspective; (b) semantic segmentation; (c) target detection
    Schematic of wall corner categories and sensor
    Wall corner category judgment under non-overlapping azimuth. (a)(c) Left limit view; (b)(d) right limit view
    Wall corner category judgment under overlapping azimuth. (a) Azimuth is 325°; (b) azimuth is 0°; (c) azimuth is 35°
    Wall corner directional judgment. (a)(e) Robot position; (b)(f) robot perspective; (c)(g) laser fitting line; (d)(h) wall corner category
    Joint calibration of camera and lidar
    Cabinet semantics obtained by pure semantic segmentation. (a) Robot perspective; (b) semantic segmentation result; (c) point cloud coordinate mapping result
    Cabinet semantics based on laser and vision. (a) Convex and concave wall corner recognition; (b) modified cabinet semantics; (c) camera depth value; (d) lidar depth value
    Overlap of semantic mapping results. (a)(d) Robot perspective; (b)(e) wall corner semantic mapping; (c)(f) object semantic mapping
    Convex and concave wall corner recognition in simulation environment. (a) Simulation environment; (b)-(d) convex and concave wall corner recognition
    Recognition of convex and concave corners. (a) Grid map; (b) concave and convex wall corner categories; (c) fused wall corner map
    Mobile robot platform and real environment. (a) Mobile robot platform; (b) real environment
    Recognition of convex and concave corners. (a) Grid map; (b) convex and concave wall corner categories; (c) four types of concave corners; (d) fused wall corner map
    Semantic map. (a) Object semantic map; (b) object semantic grid map; (c) final semantic grid map
    Real environment and semantic map. (a) Real environment; (b) grid map; (c) convex and concave wall corner categories; (d) four types of concave corners; (e) object semantic map; (f) final semantic grid map
    Environment map. (a) Grid map; (b) semantic map
    Particle convergence process of original AMCL algorithm
    Particle convergence process in localization using semantic map
    Schematics of robot pre-location based on semantic information. (a) Robot perspective; (b) wall corner category; (c) position of wall corner and robot; (d) particle distribution
    Particle convergence rate comparison
    • Table 1. Azimuth corresponding to each type of corner

      Number of corner    Azimuth /(°)
      1                   35-55
      2                   305-325
      3                   215-235
      4                   125-145
      (A minimal azimuth-to-corner lookup sketch based on this table is given after the table list below.)
    • Table 2. Wall corner category under overlapping azimuth

      θ        φ=145°-180°       φ=180°-215°       φ=55°-90°         φ=90°-125°
               Left    Right     Left    Right     Left    Right     Left    Right
      θ<35°
      θ>55°
    • Table 3. Corresponding color table of corners

      Wall corner    Convex wall corner    Concave wall corner 1    Concave wall corner 2    Concave wall corner 3    Concave wall corner 4
      Color
    • Table 4. Corresponding color table of objects

      Object    Cabinet    Door    Chair    Trash can
      Color
    • Table 5. Comparison of localization success rates with and without the semantic map

      Location method                      Number of successful localizations    Success rate /%
      Localization without semantic map    5                                      25
      Localization with semantic map       19                                     95
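Table 1 above reduces concave-corner identification to a band check on the observed azimuth. As a rough illustration only (not code from the paper), the sketch below implements that lookup in Python; the dictionary and function names are assumptions, azimuths are taken in degrees and normalized to [0°, 360°), and the overlapping-azimuth cases resolved by Table 2 are not handled here.

```python
# Hypothetical lookup based only on Table 1: corner number -> azimuth band in degrees.
# The overlapping-azimuth cases handled by Table 2 in the paper are not covered here.
CORNER_AZIMUTH_RANGES = {
    1: (35.0, 55.0),
    2: (305.0, 325.0),
    3: (215.0, 235.0),
    4: (125.0, 145.0),
}

def corner_from_azimuth(azimuth_deg: float):
    """Return the Table 1 corner number whose azimuth band contains azimuth_deg, else None."""
    azimuth_deg %= 360.0  # normalize to [0, 360)
    for corner, (low, high) in CORNER_AZIMUTH_RANGES.items():
        if low <= azimuth_deg <= high:
            return corner
    return None

# Example: 310 deg falls inside the 305-325 deg band, so this prints 2.
print(corner_from_azimuth(310.0))
```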
    Citation

    Lin Jiang, Qi Liu, Bin Lei, Jianpeng Zuo, Hui Zhao. Identifying and Constructing Semantic Maps Based on Laser and Vision Fusions for Improving Localization Performance[J]. Chinese Journal of Lasers, 2022, 49(18): 1810003

    Paper Information

    Category: Remote sensing and sensors

    Received: Dec. 13, 2021

    Accepted: Jan. 19, 2022

    Published Online: Jul. 28, 2022

    Corresponding author: Qi Liu (liuqi_xl@163.com)

    DOI: 10.3788/CJL202249.1810003
