Laser & Optoelectronics Progress, Volume 62, Issue 4, 0415001 (2025)
Point Cloud Feature Extraction Network Based on Multiscale Feature Dynamic Fusion
Accurate feature extraction in point cloud registration is often hindered by noise, surface complexity, partial overlap, and scale differences, which limit improvements in registration accuracy. To address this issue, this study proposes a point cloud registration algorithm based on the dynamic fusion of multiscale features. First, sparse convolution operations at different depths extract multilevel scale features from the point cloud data, capturing rich detail from both local and global structures. Subsequently, the multilevel scale features are concatenated to form a fused feature representation, which enhances the completeness and accuracy of the features. Additionally, the algorithm introduces a squeeze-and-excitation (SE) attention mechanism into the network's skip connections to adaptively learn and reinforce important feature information, and a global context module is integrated at the residual positions to better capture global structural information. Finally, registration is completed by estimating the rigid transformation matrix with the random sample consensus (RANSAC) algorithm. Experimental results demonstrate that, compared with mainstream methods, the proposed algorithm offers significant advantages in feature extraction and registration accuracy, effectively improving the performance of point cloud registration.
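The abstract's core ideas (SE attention on skip-connection features and concatenation-based multiscale fusion) can be illustrated with a minimal sketch. The PyTorch example below is not the authors' implementation: dense 3D convolutions stand in for the paper's sparse convolutions, and all module names, channel counts, and dimensions are illustrative assumptions.

# Hypothetical sketch (not the authors' code): squeeze-and-excitation (SE)
# attention on a skip-connection branch plus concatenation-based multiscale
# fusion. Dense Conv3d stands in for the paper's sparse convolutions.
import torch
import torch.nn as nn


class SqueezeExcitation(nn.Module):
    """Channel-wise SE attention: squeeze (global pool) then excite (gating)."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, D, H, W) voxelized point cloud features
        w = x.mean(dim=(2, 3, 4))                      # squeeze: global average pool -> (B, C)
        w = self.fc(w).view(x.size(0), -1, 1, 1, 1)    # excite: per-channel weights
        return x * w                                   # reweight channels


class MultiScaleFusion(nn.Module):
    """Extract features at two depths and fuse them by concatenation."""

    def __init__(self, in_ch: int = 1, mid_ch: int = 16, out_ch: int = 32):
        super().__init__()
        self.enc1 = nn.Conv3d(in_ch, mid_ch, kernel_size=3, padding=1)
        self.enc2 = nn.Conv3d(mid_ch, mid_ch, kernel_size=3, stride=2, padding=1)
        self.se = SqueezeExcitation(mid_ch)
        self.head = nn.Conv3d(2 * mid_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f1 = torch.relu(self.enc1(x))                              # shallow / local scale
        f2 = torch.relu(self.enc2(f1))                             # deeper / coarser scale
        f2_up = nn.functional.interpolate(f2, size=f1.shape[2:])   # match resolutions
        fused = torch.cat([self.se(f1), f2_up], dim=1)             # SE on skip branch, then concat
        return self.head(fused)                                    # fused multiscale descriptor


if __name__ == "__main__":
    voxels = torch.randn(2, 1, 16, 16, 16)   # toy voxelized point cloud batch
    print(MultiScaleFusion()(voxels).shape)  # torch.Size([2, 32, 16, 16, 16])

In practice, per-point descriptors produced by such a network could then be passed to a feature-matching RANSAC routine (for example, Open3D's registration_ransac_based_on_feature_matching) to estimate the rigid transformation, as the abstract describes.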
Jing Liu, Yuan Zhang, Le Zhang, Bo Li, Xiaowen Yang. Point Cloud Feature Extraction Network Based on Multiscale Feature Dynamic Fusion[J]. Laser & Optoelectronics Progress, 2025, 62(4): 0415001
Category: Machine Vision
Received: May 8, 2024
Accepted: Jun. 19, 2024
Published Online: Feb. 14, 2025
CSTR:32186.14.LOP241237