Optics and Precision Engineering, Volume 32, Issue 6, 857 (2024)
RGB-D SLAM method of dynamic scene based on instance segmentation and optical flow
A new method based on instance segmentation and optical flow was proposed to improve the accuracy of camera pose estimation in RGB-D SLAM for dynamic scenes. First, objects in the scene were detected using instance segmentation, non-rigid objects were eliminated, and a semantic map was constructed. Second, motion residuals were calculated from optical flow information to detect dynamic rigid objects, which were then tracked in the semantic map. Next, dynamic feature points on non-rigid objects and on dynamic rigid objects were removed from each frame, and the camera pose was optimized using the remaining stable feature points. Finally, the static background was reconstructed using a TSDF model, and dynamic rigid objects were displayed as point clouds. Tests on the TUM and Bonn datasets demonstrate that, compared with the state-of-the-art method ACEFusion, the proposed method improves camera pose accuracy by approximately 43%. The results show that retaining the feature points of dynamic rigid objects while they are static can significantly improve camera pose estimation. Dense mapping experiments show that the method performs better in dynamic 3D reconstruction, with an average reconstruction error of 0.042 m. Our code is available at
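The motion-residual step described above, detecting dynamic points by comparing the observed optical flow against the flow predicted from the estimated camera motion, can be sketched as follows. This is a minimal illustrative sketch: the function names (`motion_residuals`, `split_static_dynamic`) and the residual threshold are assumptions for exposition, not the paper's implementation.

```python
def motion_residuals(observed_flow, predicted_flow):
    """Per-point magnitude of the difference between the observed 2D optical
    flow and the flow predicted by the estimated camera motion (illustrative)."""
    return [((u1 - u2) ** 2 + (v1 - v2) ** 2) ** 0.5
            for (u1, v1), (u2, v2) in zip(observed_flow, predicted_flow)]

def split_static_dynamic(points, observed_flow, predicted_flow, thresh=1.5):
    """Classify feature points: a residual above `thresh` (pixels, assumed
    value) marks the point as lying on a dynamic object; the rest are kept
    as stable points for pose optimization."""
    residuals = motion_residuals(observed_flow, predicted_flow)
    static = [p for p, r in zip(points, residuals) if r <= thresh]
    dynamic = [p for p, r in zip(points, residuals) if r > thresh]
    return static, dynamic
```

In the full pipeline, only the `static` set would feed the pose optimizer, while `dynamic` points on rigid objects would be tracked separately in the semantic map.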
Chenggen WANG, Jinlong SHI, Haowei ZHU, Suqin BAI, Yunhan SUN, Jiawen LU, Shucheng HUANG. RGB-D SLAM method of dynamic scene based on instance segmentation and optical flow[J]. Optics and Precision Engineering, 2024, 32(6): 857
Received: Sep. 7, 2023
Accepted: --
Published Online: Apr. 19, 2024
The Author Email: SHI Jinlong (shi_jinlong@163.com)