Optics and Precision Engineering, Volume 32, Issue 5, 752 (2024)
Binocular vision SLAM with fused point and line features in weak texture environment
[1] QUAN M X, PIAO S H, LI G. Overview of visual SLAM[J]. CAAI Transactions on Intelligent Systems, 2016, 11(6): 768-776. (in Chinese). doi: 10.11992/tis.201607026
[2] ZHANG Y, ZHANG Y, ZHANG N, et al. Dynamic SLAM of binocular catadioptric panoramic camera based on inverse depth filter[J]. Opt. Precision Eng., 2022, 30(11): 1282-1289. (in Chinese). doi: 10.37188/ope.20223011.1282
[3] ZHAO L Y, JIN R, ZHU Y Q, et al. Stereo visual-inertial SLAM algorithm based on merge of point and line features[J]. Acta Aeronautica et Astronautica Sinica, 2022, 43(3): 355-369. (in Chinese). doi: 10.7527/j.issn.1000-6893.2022.3.hkxb202203029
[4] ZHOU J L, ZHU B, WU ZH L. Camera pose estimation based on 2D image and 3D point cloud fusion[J]. Opt. Precision Eng., 2022, 30(22): 2901-2912. (in Chinese). doi: 10.37188/ope.20223022.2901
[5] JIA X X, ZHAO D Q, ZHANG L T, et al. A visual SLAM algorithm based on adaptive inertial navigation assistant feature matching[J]. Opt. Precision Eng., 2023, 31(5): 621-630. (in Chinese). doi: 10.37188/OPE.20233105.0621
[6] LI H F, HU Z H, CHEN X W. PLP-SLAM: a visual SLAM method based on point-line-plane feature fusion[J]. Robot, 2017, 39(2): 214-220, 229. (in Chinese). doi: 10.13973/j.cnki.robot.2017.0214
[7] MUR-ARTAL R, MONTIEL J M M, TARDOS J D. ORB-SLAM: a versatile and accurate monocular SLAM system[J]. IEEE Transactions on Robotics, 31, 1147-1163(2015).
[8] MUR-ARTAL R, TARDÓS J D. ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras[J]. IEEE Transactions on Robotics, 33, 1255-1262(2017).
[9] CAMPOS C, ELVIRA R, RODRIGUEZ J J G et al. ORB-SLAM3: an accurate open-source library for visual, visual-inertial, and multimap SLAM[J]. IEEE Transactions on Robotics, 37, 1874-1890(2021).
[10] QIN T, LI P L, SHEN S J. VINS-mono: a robust and versatile monocular visual-inertial state estimator[J]. IEEE Transactions on Robotics, 34, 1004-1020(2018).
[11] ENGEL J, KOLTUN V, CREMERS D. Direct sparse odometry[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40, 611-625(2018).
[12] FORSTER C, PIZZOLI M, SCARAMUZZA D. SVO: fast semi-direct monocular visual odometry[C], 15-22(2014).
[13] YUNUS R, LI Y Y, TOMBARI F. ManhattanSLAM: robust planar tracking and mapping leveraging mixture of Manhattan frames[C], 6687-6693(2021).
[14] WEI H, TANG F L, XU Z W et al. A point-line VIO system with novel feature hybrids and with novel line predicting-matching[J]. IEEE Robotics and Automation Letters, 6, 8681-8688(2021).
[15] GOMEZ-OJEDA R, MORENO F A, ZUNIGA-NOEL D et al. PL-SLAM: a stereo SLAM system through the combination of points and line segments[J]. IEEE Transactions on Robotics, 35, 734-746(2019).
[16] COMPANY-CORCOLES J P, GARCIA-FIDALGO E, ORTIZ A. MSC-VO: exploiting Manhattan and structural constraints for visual odometry[J]. IEEE Robotics and Automation Letters, 7, 2803-2810(2022).
[17] GROMPONE VON GIOI R, JAKUBOWICZ J, MOREL J M et al. LSD: a line segment detector[J]. Image Processing On Line, 2, 35-55(2012).
[18] HE Y J, ZHAO J, GUO Y et al. PL-VIO: tightly-coupled monocular visual-inertial odometry using point and line features[J]. Sensors, 18, 1159(2018).
[19] GOMEZ-OJEDA R, BRIALES J, GONZALEZ-JIMENEZ J. PL-SVO: Semi-direct Monocular Visual Odometry by combining points and line segments[C], 4211-4216(2016).
[20] WEI H, TANG F L, ZHANG C F et al. Highly efficient line segment tracking with an IMU-KLT prediction and a convex geometric distance minimization[C], 3999-4005(2021).
[21] ZHANG L L, KOCH R. An efficient and robust line segment matching approach based on LBD descriptor and pairwise geometric consistency[J]. Journal of Visual Communication and Image Representation, 24, 794-805(2013).
[22] KIM P, COLTIN B, KIM H J. Low-drift visual odometry in structured environments by decoupling rotational and translational motion[C], 7247-7253(2018).
[23] ZHANG T, LIU C J, LI J Q et al. A new visual inertial simultaneous localization and mapping (SLAM) algorithm based on point and line features[J]. Drones, 6, 23(2022).
[24] BURRI M, NIKOLIC J, GOHL P et al. The EuRoC micro aerial vehicle datasets[J]. International Journal of Robotics Research, 35, 1157-1163(2016).
[25] MENZE M, GEIGER A. Object scene flow for autonomous vehicles[C], 3061-3070(2015).
Kun GONG, Xin XU, Xiaoqing CHEN, Yuelei XU, Zhaoxiang ZHANG. Binocular vision SLAM with fused point and line features in weak texture environment[J]. Optics and Precision Engineering, 2024, 32(5): 752
Received: Apr. 21, 2023
Accepted: --
Published Online: Apr. 2, 2024
The Author Email: Yuelei XU (xuyuelei@nwpu.edu.cn)