Acta Optica Sinica, Volume. 44, Issue 14, 1415001(2024)

Three-Dimensional Point Cloud Registration Network Based on Deep Interactive Multi-Scale Receptive Field Feature Learning

Han Zhou1, Xuchu Wang1,2,*, and Yue Yuan1
Author Affiliations
  • 1College of Optoelectronic Engineering, Chongqing University, Chongqing 400040, China
  • 2Key Laboratory of Optoelectronic Technology & Systems (Chongqing University), Ministry of Education, Chongqing 400040, China

    Objective

    We aim to enhance the performance of point cloud registration. In recent years, attention mechanisms have shown great potential in 3D vision tasks such as point cloud registration. However, the lack of deep interaction between the two point clouds during feature extraction can cause important latent similar structures to be lost, degrading performance in low-overlap scenarios. To this end, we propose DIM-RFNet, a 3D point cloud registration network based on deep interactive multi-scale receptive field features, which combines structural context consistency to identify latent similar structural features for efficient point cloud registration.

    Methods

    The proposed DIM-RFNet model consists of two stages. In the coarse registration stage, the sampled point cloud is fed into the neighborhood patch feature extraction module to obtain the neighborhood patches and a feature information matrix. This information is then passed to the context structure encoder, which embeds the neighborhood patches into a high-dimensional space and aggregates features at different scales. The aggregated features are input into a transformer to update the high-dimensional features. The context structure decoder progressively decodes the neighborhood patches and their high-dimensional features with a multi-layer perceptron (MLP), ultimately outputting a set of key points and their dimension-reduced structural features. In the fine registration stage, the key points and features from the coarse stage are input into the overlap relation encoder, which employs structural-feature cross-attention and self-attention to predict pairs of points with overlapping relations, yielding an overlap relation confidence matrix. The top K pairs with the highest overlap confidence are selected and passed to the overlap relation decoder, which outputs feature representations, overlap scores, and match scores, as sketched below.
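    To make the fine registration stage concrete, the following is a minimal PyTorch sketch of the attention-based overlap prediction described above: self-attention within each point cloud, cross-attention between them, an overlap relation confidence matrix, and top-K pair selection. The module and function names (OverlapRelationEncoder, top_k_pairs) and all hyperparameters are hypothetical illustrations under the assumptions stated in the comments, not the authors' implementation.

    ```python
    import torch
    import torch.nn as nn

    class OverlapRelationEncoder(nn.Module):
        """Hypothetical sketch of the overlap relation encoder: alternating
        self- and cross-attention over the structural features of the two
        point clouds, followed by a pairwise overlap-confidence matrix."""

        def __init__(self, dim=256, heads=4):
            super().__init__()
            self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, feat_src, feat_tgt):
            # feat_src: (B, N, dim), feat_tgt: (B, M, dim) key-point features
            # produced by the coarse registration stage.
            s, _ = self.self_attn(feat_src, feat_src, feat_src)   # intra-cloud context
            t, _ = self.self_attn(feat_tgt, feat_tgt, feat_tgt)
            s2, _ = self.cross_attn(s, t, t)                      # inter-cloud interaction
            t2, _ = self.cross_attn(t, s, s)
            # Scaled dot-product confidence for every source-target pair:
            conf = torch.einsum('bnd,bmd->bnm', s2, t2) / s2.shape[-1] ** 0.5
            return s2, t2, conf

    def top_k_pairs(conf, k):
        """Select the K source-target pairs with the highest overlap confidence."""
        B, N, M = conf.shape
        vals, idx = conf.reshape(B, N * M).topk(k, dim=1)
        rows = torch.div(idx, M, rounding_mode='floor')
        cols = idx % M
        return torch.stack((rows, cols), dim=-1), vals  # (B, K, 2), (B, K)
    ```

    The selected pairs and their updated features would then be handed to the overlap relation decoder, which in the paper's pipeline produces the final feature representations, overlap scores, and match scores.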

    Results and Discussions

    Our method is extensively evaluated on the synthetic ModelNet40 and ModelLoNet datasets. The experiments demonstrate that DIM-RFNet outperforms the comparison methods in relative translation error (RTE) and Chamfer distance (CD) on the high-overlap ModelNet40. Experiments on the real indoor scene datasets 3DMatch and 3DLoMatch show that DIM-RFNet reliably predicts overlap relations in low-overlap scenarios. Experiments on the real outdoor OdometryKITTI dataset reveal that DIM-RFNet surpasses other methods in relative rotation error (RRE) and registration recall (RR), confirming its suitability for large-scale outdoor scenes.
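    For reference, these evaluation metrics are typically computed as in the minimal NumPy sketch below, which follows the standard benchmark definitions. The threshold values shown (e.g., 5° rotation and 2 m translation for registration recall on OdometryKITTI) are common conventions and are assumptions here, not values taken from this paper.

    ```python
    import numpy as np

    def rotation_error_deg(R_est, R_gt):
        # Relative rotation error (RRE): geodesic angle between the
        # estimated and ground-truth rotation matrices, in degrees.
        cos = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

    def translation_error(t_est, t_gt):
        # Relative translation error (RTE): Euclidean distance between
        # the estimated and ground-truth translation vectors.
        return float(np.linalg.norm(t_est - t_gt))

    def registration_recall(errors, rre_thresh=5.0, rte_thresh=2.0):
        # Registration recall (RR): fraction of point cloud pairs whose
        # rotation and translation errors both fall under the thresholds.
        ok = [r <= rre_thresh and t <= rte_thresh for r, t in errors]
        return sum(ok) / max(len(ok), 1)
    ```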

    Conclusions

    We introduce DIM-RFNet, a registration network based on deep interactive multi-scale receptive field features. DIM-RFNet adopts a coarse-to-fine registration strategy, leveraging graph structure and edge information from unordered points to obtain neighborhood patches and feature information matrices. The proposed network is evaluated on the public ModelNet40, ModelLoNet, 3DMatch, 3DLoMatch, and OdometryKITTI datasets, and comparative experiments demonstrate that it yields competitive improvements in low-overlap scenarios.

    Paper Information

    Category: Machine Vision

    Received: Jan. 19, 2024

    Accepted: Apr. 15, 2024

    Published Online: Jul. 4, 2024

    The Author Email: Xuchu Wang (xcwang@cqu.edu.cn)

    DOI: 10.3788/AOS240529

    CSTR: 32393.14.AOS240529
