Computer Engineering, Vol. 51, Issue 8, 305 (2025)

Monocular Visual-Inertial Simultaneous Localization and Mapping Method Based on Feature Collaboration

WANG Hao1,2, AI Kecheng1,2, and ZHANG Quanyi3,*
Author Affiliations
  • 1School of Computer Science and Information Engineering, Hefei University of Technology, Hefei 230009, Anhui, China
  • 2Key Laboratory of Knowledge Engineering with Big Data, Ministry of Education, Hefei University of Technology, Hefei 230009, Anhui, China
  • 3Anhui Provincial High-tech Development Center (Anhui Basic Research Management Center), Hefei 230091, Anhui, China

    Paper Information

    Received: Jan. 18, 2024

    Accepted: Aug. 26, 2025

    Published Online: Aug. 26, 2025

Corresponding author: ZHANG Quanyi (313159623@qq.com)

DOI: 10.19678/j.issn.1000-3428.0069250
