APPLIED LASER, Vol. 44, Issue 4, 113 (2024)

Research on 3D Multimodal Mapping Based on Lidar-Vision Fusion

Yang Xudong, Lai Huige*, Kang Wen, Wang Peng, Tao Han, and Li Shaodong
Author Affiliations
  • School of Mechanical Engineering, Ningxia University, Yinchuan 750021, Ningxia, China

    This paper addresses the challenges of uneven frame construction, large errors, and poor reconstruction that mobile robots face when building 3D maps of indoor environments. To tackle these issues, we introduce Camera Radar Net (CRN), a novel 3D map construction method that fuses LiDAR and RGB-D camera data. Within CRN, a fusion algorithm based on LiDAR-Visual Inertial Odometry via Smoothing and Mapping (LVIO-SAM) is proposed to optimally estimate the spatial pose of the two-dimensional mobile platform. The estimated pose and the wheel odometry are then dynamically fused by the Error State Kalman Filter (ESKF) algorithm to improve the mapping result. Finally, the method was verified experimentally on a mobile robot. The results show that, compared with LiDAR-inertial odometry and visual-inertial odometry, the proposed method reduces the dimensional error of the 3D map by 22% and improves odometry accuracy by 0.19% when mapping indoor environments.
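    The ESKF fusion step described above — propagating the pose with wheel odometry and correcting it with the LVIO-SAM pose estimate — can be sketched as follows. This is an illustrative minimal 2D pose filter, not the authors' implementation; the class name `ESKF2D`, the unicycle motion model, and the noise covariances are all assumptions made for the example.

    ```python
    import numpy as np

    class ESKF2D:
        """Minimal error-state Kalman filter: wheel odometry drives the
        prediction step; an external pose estimate (e.g. from a LiDAR-visual
        odometry front end) drives the correction step. Illustrative only."""

        def __init__(self):
            self.x = np.zeros(3)                   # nominal state [x, y, yaw]
            self.P = np.eye(3) * 0.01              # error-state covariance
            self.Q = np.diag([0.02, 0.02, 0.01])   # process noise (wheel slip, etc.)
            self.R = np.diag([0.05, 0.05, 0.02])   # pose-measurement noise

        def predict(self, v, omega, dt):
            """Propagate the nominal state with a unicycle wheel-odometry model."""
            th = self.x[2]
            self.x += np.array([v * np.cos(th) * dt,
                                v * np.sin(th) * dt,
                                omega * dt])
            # Jacobian of the motion model w.r.t. the error state
            F = np.array([[1.0, 0.0, -v * np.sin(th) * dt],
                          [0.0, 1.0,  v * np.cos(th) * dt],
                          [0.0, 0.0,  1.0]])
            self.P = F @ self.P @ F.T + self.Q * dt

        def update(self, z):
            """Correct with an absolute pose measurement z = [x, y, yaw]."""
            H = np.eye(3)                          # pose is observed directly
            y = z - self.x                         # innovation (error state)
            y[2] = (y[2] + np.pi) % (2 * np.pi) - np.pi  # wrap yaw error
            S = H @ self.P @ H.T + self.R
            K = self.P @ H.T @ np.linalg.inv(S)    # Kalman gain
            self.x += K @ y                        # inject error into nominal state
            self.P = (np.eye(3) - K @ H) @ self.P
    ```

    In use, each wheel-encoder sample calls `predict`, and each pose output from the LiDAR-visual front end calls `update`; the corrected state then weights the two sources according to their covariances.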

    Yang Xudong, Lai Huige, Kang Wen, Wang Peng, Tao Han, Li Shaodong. Research on 3D Multimodal Mapping Based on Lidar-Vision Fusion[J]. APPLIED LASER, 2024, 44(4): 113

    Paper Information

    Received: May 17, 2023

    Accepted: Dec. 13, 2024

    Published Online: Dec. 13, 2024

    The Author Email: Huige Lai (1491081634@qq.com)

    DOI: 10.14128/j.cnki.al.20244404.113
