Laser Journal, Vol. 46, Issue 3, 105 (2025)
A fusion method of point cloud and image based on multi-feature matching
When an unmanned detection platform exploits the complementary strengths of LiDAR and a visible-light camera, calibrating and fusing point clouds with color images using only feature points on a calibration object yields unsatisfactory results and requires multiple data collections to fit a solution. To address this, a fusion method based on distributed automatic calibration of the LiDAR and the visible-light camera is proposed, so that the fused data carry both the spatial stereoscopic characteristics of the point cloud and the color texture information of the visible-light image. The method first calibrates the LiDAR and camera with a planar calibration board to obtain an initial transformation, then refines the alignment using natural edge features in the sensors' common field of view, and finally fuses the two data streams into a visualization result. Compared with a calibration-and-fusion method based on manual matching and an automatic calibration-and-fusion method based on planar calibration boards, the proposed method improves accuracy by 33.8% and 23.1%, respectively, and its visualization results effectively restore the real scene.
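The core of any such pipeline is applying the calibrated extrinsic transformation to project LiDAR points into the image plane and attach pixel colors to them. The sketch below illustrates this standard projection-and-coloring step only; the function name, array layouts, and the use of a simple pinhole model (no lens distortion) are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def colorize_point_cloud(points, image, K, R, t):
    """Project LiDAR points into a camera image and attach RGB colors.

    points : (N, 3) LiDAR points in the LiDAR frame
    image  : (H, W, 3) RGB image
    K      : (3, 3) camera intrinsic matrix
    R, t   : rotation (3, 3) and translation (3,) from the LiDAR frame
             to the camera frame (the calibrated extrinsics)
    Returns an (M, 6) array of XYZ (camera frame) + RGB for the points
    that land inside the image.
    """
    # Transform points into the camera coordinate frame.
    cam = points @ R.T + t
    # Discard points behind the camera (non-positive depth).
    cam = cam[cam[:, 2] > 0]
    # Pinhole projection: homogeneous pixel coords, then divide by depth.
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    # Keep only points whose projection falls inside the image bounds.
    h, w = image.shape[:2]
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    # Fuse: each surviving 3D point gets the RGB value of its pixel.
    return np.hstack([cam[valid], image[v[valid], u[valid]]])
```

In the method described above, the planar-board calibration would supply the initial `R`, `t`, and edge-feature alignment would refine them before this fusion step is run.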
XU Hao, WANG Xiaoxia, YANG Fengbao. A fusion method of point cloud and image based on multi-feature matching[J]. Laser Journal, 2025, 46(3): 105.
Received: Sep. 21, 2024
Accepted: Jun. 12, 2025
Published Online: Jun. 12, 2025
Corresponding author: WANG Xiaoxia (wangxiaoxia@nuc.edu.com)