Acta Optica Sinica, Volume 45, Issue 8, 0815002 (2025)
Multi-Sensor Fusion SLAM Algorithm Based on Line Feature Optical Flow
With the rapid development of micro-electromechanical systems, the internet, and sensor technologies, robotics has advanced significantly and is widely applied in manufacturing, rescue operations in hazardous environments, and field exploration. To navigate autonomously in unfamiliar environments, a robot must recognize its surroundings and determine its own position, which enables it to move independently, adapt to complex conditions, and perform diverse tasks. This challenge is commonly addressed by simultaneous localization and mapping (SLAM). As research on SLAM has deepened, the scenarios faced by SLAM algorithms have shifted from structured indoor environments to weakly textured, unstructured, and complex environments. In such challenging scenarios, traditional single-sensor SLAM algorithms often suffer from insufficient feature extraction, inter-frame matching errors, and sensor degradation, which lead to reduced localization accuracy, trajectory drift, and inconsistent mapping. To address these challenges, we present a multi-sensor SLAM algorithm that fuses LiDAR, visual, and inertial measurements by integrating both point- and line-feature optical flow tracking. The proposed method aims to improve the localization accuracy and robustness of SLAM in weakly textured, unstructured environments.
The proposed algorithm integrates a visual-inertial system (VIS) incorporating line feature optical flow and a LiDAR-inertial system (LIS). In the VIS front end, an improved ELSED algorithm extracts robust line features from the environment, adaptively adjusting its parameters and ensuring a uniform distribution of extracted features. Line features are tracked across consecutive frames using Lucas-Kanade (LK) optical flow, with image pyramids used to stabilize inter-frame line tracking. In the VIS back end, residuals are computed from the angular deviation of line-feature reprojections, and a sliding-window bundle adjustment (BA) jointly optimizes the point- and line-feature residuals, fusing the two feature types and estimating the robot's coarse pose. In the LIS, the spatial coordinates of line-feature endpoints are obtained by projecting the LiDAR point cloud, which accelerates the initialization of visual features. Factor graph optimization in the back end refines the robot's pose by combining residuals from the inertial measurement unit, LiDAR point clouds, and prior information, correcting the coarse pose initially estimated by the VIS. The two subsystems jointly perform loop closure detection: the VIS identifies potential loop-closure frames using a bag-of-words model, while the LIS retrieves the nearest loop-closure candidates in terms of Euclidean distance using a KD tree. Candidate frames are refined through scan matching to select the optimal loop-closure frames, which are then added to the factor graph as new constraints.
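The paper does not provide code; the Python sketch below only illustrates the kind of pyramidal LK line tracking described above, assuming an OpenCV-style API, segments represented by their two endpoints, and a simple sample-and-refit scheme. Names such as track_lines, N_SAMPLES, and MIN_TRACKED are hypothetical and not from the paper.

```python
# Minimal sketch (assumptions noted above): track line segments between
# consecutive grayscale frames with pyramidal Lucas-Kanade optical flow.
import cv2
import numpy as np

N_SAMPLES = 5     # points sampled along each segment (assumption)
MIN_TRACKED = 4   # minimum successfully tracked samples to keep a segment

def sample_segment(p0, p1, n=N_SAMPLES):
    """Evenly sample n points on the segment p0 -> p1; each point is (x, y)."""
    t = np.linspace(0.0, 1.0, n).reshape(-1, 1)
    return (1.0 - t) * np.asarray(p0, dtype=np.float32) + t * np.asarray(p1, dtype=np.float32)

def track_lines(prev_gray, curr_gray, lines):
    """Track segments given as [((x0, y0), (x1, y1)), ...] from prev_gray to curr_gray."""
    if not lines:
        return []
    # Stack sampled points of all segments into one array for a single LK call.
    pts_prev = np.vstack([sample_segment(p0, p1) for p0, p1 in lines]).astype(np.float32)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, pts_prev.reshape(-1, 1, 2), None,
        winSize=(21, 21), maxLevel=3,  # image pyramid stabilizes larger motions
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    pts_curr = pts_curr.reshape(-1, 2)
    status = status.reshape(-1)

    tracked = []
    for i in range(len(lines)):
        s, e = i * N_SAMPLES, (i + 1) * N_SAMPLES
        ok = status[s:e].astype(bool)
        if ok.sum() < MIN_TRACKED:
            continue  # drop segments with too few tracked samples
        # Refit the segment to the tracked samples: take the extreme points along
        # the principal direction (a simple stand-in for a least-squares line fit).
        pts = pts_curr[s:e][ok]
        d = pts - pts.mean(axis=0)
        axis = np.linalg.svd(d, full_matrices=False)[2][0]
        proj = d @ axis
        tracked.append((tuple(pts[np.argmin(proj)]), tuple(pts[np.argmax(proj)])))
    return tracked
```

In the actual system, the segments surviving such tracking would supply the angular-reprojection residuals used in the sliding-window BA described above.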
The line feature tracking experiment (Fig. 4) demonstrates that the ELSED+LK strategy extracts and tracks more line feature pairs than the LSD+LBD matching strategy. The line feature pairs retained after edge detection and nearest-neighbor filtering are sufficient to establish constraints for back-end optimization. Moreover, the ELSED+LK strategy requires only about 90% of the time consumed by the traditional extraction and matching strategy (Table 1), offering superior real-time performance. In the multi-scenario experiments on the M2DGR dataset (Table 3), the proposed multi-sensor fusion system shows clear advantages over single-sensor inertial systems. Fusing OFPL-VIS and LIS improves the system's localization accuracy: compared with FAST-LIO2, the proposed algorithm achieves a 41.66% improvement in average localization accuracy without loop closure detection. Furthermore, introducing visual line features enhances the system's robustness in weakly textured and unstructured environments: compared with the LVI-SAM algorithm, the proposed algorithm improves average localization accuracy by 11.53% and 18.36% in scenarios without and with loop closure, respectively. A real-time performance analysis of the proposed algorithm and LVI-SAM on the same sequences (Table 4) shows that, although the inclusion of line feature optical flow increases the processing time of visual odometry and back-end optimization, the frame rate of the proposed algorithm still exceeds the measurement frequencies of both the camera and the LiDAR, meeting real-time requirements.
We propose a multi-sensor fusion SLAM algorithm integrating line feature optical flow, offering a novel solution for autonomous robot localization and mapping in complex environments. The algorithm employs an improved ELSED algorithm and LK optical flow tracking to extract and track more robust line features from the environment, enhancing the visual system’s robustness in weakly textured and unstructured environments. Compared to traditional single-sensor SLAM algorithms, multi-sensor fusion enhances the robot’s environmental perception, ensuring its capability for autonomous navigation and operation across diverse environments. Experiments demonstrate that the multi-sensor SLAM algorithm with integrated line feature optical flow not only extracts and tracks line features at a higher frequency than traditional methods in weakly textured environments but also achieves more accurate localization and mapping in complex scenarios. In conclusion, the proposed algorithm integrates line and point features with LiDAR point clouds, which significantly improves localization accuracy in dynamic scenarios and complex environments. This work establishes a critical technical foundation for the broader application of robots.
Yuanbin Chi, Xiangyin Meng, Shide Xiao, Xiujie Lu, Shouye Wu. Multi-Sensor Fusion SLAM Algorithm Based on Line Feature Optical Flow[J]. Acta Optica Sinica, 2025, 45(8): 0815002
Category: Machine Vision
Received: Dec. 7, 2024
Accepted: Feb. 17, 2025
Published Online: Apr. 27, 2025
The Author Email: Xiangyin Meng (xymeng@swjtu.edu.cn)
CSTR:32393.14.AOS241859