Laser & Optoelectronics Progress, Vol. 60, Issue 20, 2028009 (2023)
Adaptive Tightly Coupled Lidar-Visual Simultaneous Localization and Mapping Framework
[1] Leng L M, Zeng Z B, Wu G H et al. The phase calibration for integrated optical phased arrays using an artificial neural network with resolved phase ambiguity[J]. Photonics Research, 10, 347-356(2022).
[2] Hu C Y, Huang H, Chen M H et al. FourierCam: a camera for video spectrum acquisition in a single shot[J]. Photonics Research, 9, 701-713(2021).
[3] Teng J, Hu C Y, Huang H et al. Single-shot 3D tracking based on a polarization multiplexed Fourier-phase camera[J]. Photonics Research, 9, 1924-1930(2021).
[4] Mur-Artal R, Tardós J D. ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras[J]. IEEE Transactions on Robotics, 33, 1255-1262(2017).
[5] Zhang J, Singh S. LOAM: lidar odometry and mapping in real-time[C]. Robotics: Science and Systems, 2, 1-9(2014).
[6] Shan T X, Englot B. LeGO-LOAM: lightweight and ground-optimized lidar odometry and mapping on variable terrain[C], 4758-4765(2019).
[7] Qin T, Li P L, Shen S J. VINS-mono: a robust and versatile monocular visual-inertial state estimator[J]. IEEE Transactions on Robotics, 34, 1004-1020(2018).
[8] Zhang J, Singh S. Laser-visual-inertial odometry and mapping with high robustness and low drift[J]. Journal of Field Robotics, 35, 1242-1264(2018).
[9] Zuo X, Geneva P, Lee W et al. LIC-fusion: LiDAR-inertial-camera odometry[C], 5848-5854(2020).
[10] Zuo X, Yang Y L, Geneva P et al. LIC-fusion 2.0: LiDAR-inertial-camera odometry with sliding-window plane-feature tracking[C], 5112-5119(2021).
[11] Zhang J, Kaess M, Singh S. Real-time depth-enhanced monocular odometry[C], 4973-4980(2014).
[12] Graeter J, Wilczynski A, Lauer M. LIMO: lidar-monocular visual odometry[C], 7872-7879(2019).
[13] Shan T X, Englot B, Ratti C et al. LVI-SAM: tightly-coupled lidar-visual-inertial odometry via smoothing and mapping[C], 5692-5698(2021).
[14] Jia Y P, Luo H Y, Zhao F et al. Lvio-fusion: a self-adaptive multi-sensor fusion SLAM framework using actor-critic method[C], 286-293(2021).
[15] Kim G, Kim A. Scan context: egocentric spatial descriptor for place recognition within 3D point cloud map[C], 4802-4809(2019).
[16] Hartley R, Zisserman A. Multiple view geometry in computer vision[M](2003).
[17] Rusu R B, Cousins S. 3D is here: point cloud library (PCL)[C](2011).
[18] Su Y, Wang T, Yao C et al. GR-SLAM: vision-based sensor fusion SLAM for ground robots on complex terrain[C], 5096-5103(2021).
[19] Wang H, Wang C, Xie L H. Intensity scan context: coding intensity and geometry relations for loop closure detection[C], 2095-2101(2020).
[20] Dellaert F. Factor graphs and GTSAM: a hands-on introduction[R](2012).
[21] Geiger A, Lenz P, Stiller C et al. Vision meets robotics: the KITTI dataset[J]. International Journal of Robotics Research, 32, 1231-1237(2013).
[22] Caesar H, Bankiti V, Lang A H et al. nuScenes: a multimodal dataset for autonomous driving[C], 11618-11628(2020).
[24] Wang H, Wang C, Chen C L et al. F-LOAM: fast LiDAR odometry and mapping[C], 4390-4396(2021).
[25] Yokozuka M, Koide K, Oishi S et al. LiTAMIN: LiDAR-based tracking and mapping by stabilized ICP for geometry approximation with normal distributions[C], 5143-5150(2021).
Citation: Weichao Zhou, Jun Huang. Adaptive Tightly Coupled Lidar-Visual Simultaneous Localization and Mapping Framework[J]. Laser & Optoelectronics Progress, 2023, 60(20): 2028009.
Category: Remote Sensing and Sensors
Received: Nov. 30, 2022
Accepted: Feb. 6, 2023
Published Online: Oct. 13, 2023
Author Email: Jun Huang (huangj@sari.ac.cn)