Optics and Precision Engineering, Volume 31, Issue 5, 656 (2023)

Rotation-invariant 2D views-3D point clouds auto-encoder

Xianying LIU1, Qiuxia WU1,*, Wenxiong KANG2 and Yuqiong LI3
Author Affiliations
  • 1School of Software Engineering, South China University of Technology, Guangzhou 510006, China
  • 2School of Automation Science and Engineering, South China University of Technology, Guangzhou 510641, China
  • 3Institute of Mechanics, Chinese Academy of Sciences, Beijing 100190, China

    The unsupervised representation learning of point clouds is crucial for understanding and analyzing point clouds, and a 3D reconstruction-based autoencoder is an important architecture in unsupervised learning. To address the rotation interference and insufficient feature learning capability of existing autoencoders, this study proposes a rotation-invariant 2D views-3D point clouds autoencoder. First, a rotation-invariant feature conversion strategy that fuses local and global representations is designed. For the local representation, the input point clouds are transformed into handcrafted rotation-invariant features; for the global representation, an alignment module based on PCA is proposed to align rotated point clouds to the same pose, excluding rotation interference while complementing the global information. Then, for the encoder, local and non-local modules are designed to fully extract the local spatial features and non-local contextual correlations of the point cloud and to model the semantic consistency between features at different levels. Finally, a PCA alignment-based decoding method for 2D-3D reconstruction is proposed for reconstructing the aligned 3D point clouds and 2D views, so that the point-cloud representation output by the encoder integrates rich learning signals from both the 3D point clouds and the 2D views. Experiments demonstrate that the recognition accuracies of this algorithm are 90.84% and 89.02% on the randomly rotated synthetic dataset ModelNet40 and the real dataset ScanObjectNN, respectively. Moreover, the learned point-cloud representations achieve good discriminability without label supervision and exhibit good rotational robustness.
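    The PCA-based pose alignment mentioned in the abstract can be sketched as follows. This is a minimal illustration of the general technique, not the paper's exact alignment module: the helper `pca_align` and its sign-fixing rule (third moments of the projected coordinates) are assumptions made for the sketch.

```python
import numpy as np

def pca_align(points):
    # Hypothetical helper sketching PCA-based pose alignment: project the
    # cloud onto its principal axes so rotated copies of the same cloud
    # map to the same canonical pose.
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)        # ascending eigenvalues
    axes = eigvecs[:, np.argsort(eigvals)[::-1]]  # order by descending variance
    proj = centered @ axes
    # Eigenvectors are defined only up to sign; fix each axis using the
    # third moment of the projected coordinates, which does not change
    # under a rotation of the input cloud.
    signs = np.sign((proj ** 3).sum(axis=0))
    signs[signs == 0] = 1.0
    return proj * signs

# A rotated copy of the same cloud aligns to (numerically) the same pose.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(128, 3)) * np.array([3.0, 2.0, 1.0])
theta = 0.7
rot_z = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0, 0.0, 1.0]])
aligned_a = pca_align(cloud)
aligned_b = pca_align(cloud @ rot_z.T)
print(np.allclose(aligned_a, aligned_b, atol=1e-6))
```

    Aligning inputs this way removes the rotation as a nuisance factor before reconstruction; degenerate cases (for instance, symmetric shapes with nearly equal principal variances) would need extra handling that this sketch omits.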

    Xianying LIU, Qiuxia WU, Wenxiong KANG, Yuqiong LI. Rotation-invariant 2D views-3D point clouds auto-encoder[J]. Optics and Precision Engineering, 2023, 31(5): 656

    Paper Information

    Category: Three-dimensional topographic mapping

    Received: Jul. 27, 2022

    Accepted: --

    Published Online: Apr. 4, 2023

    Author Email: Qiuxia WU (qxwu@scut.edu.cn)

    DOI: 10.37188/OPE.20233105.0656
