Optics and Precision Engineering, Volume 31, Issue 5, 656 (2023)
Rotation-invariant 2D views-3D point clouds auto-encoder
Unsupervised representation learning of point clouds is crucial for understanding and analyzing them, and the 3D-reconstruction-based autoencoder is an important architecture in unsupervised learning. To address the rotation interference and insufficient feature-learning capability of existing autoencoders, this study proposes a rotation-invariant 2D views-3D point clouds autoencoder. First, a rotation-invariant feature conversion strategy that fuses local and global representations is designed. For the local representation, the input point cloud is transformed into handcrafted rotation-invariant features; for the global representation, a PCA-based alignment module is proposed to align rotated point clouds to a common pose, excluding rotation interference while complementing the global information. Then, for the encoder, local and non-local modules are designed to fully extract the local spatial features and non-local contextual correlations of the point cloud and to model the semantic consistency between features at different levels. Finally, a PCA-alignment-based 2D-3D reconstruction decoding method is proposed to reconstruct the aligned 3D point clouds and 2D views, so that the point-cloud representation output by the encoder integrates rich learning signals from both the 3D point clouds and the 2D views. Experiments demonstrate that the recognition accuracies of this algorithm are 90.84% and 89.02% on the randomly rotated synthetic dataset ModelNet40 and the real dataset ScanObjectNN, respectively. Moreover, the learned point-cloud representations achieve good discriminability without label supervision and exhibit good rotational robustness.
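To illustrate why PCA-based alignment cancels rotation interference, the following is a minimal NumPy sketch, not the authors' implementation: rotating a point cloud rotates its covariance eigenvectors by the same rotation, so projecting the points onto those eigenvectors yields a pose that is independent of the input rotation, up to per-axis sign ambiguities. The sign-disambiguation convention (third-moment sign per axis) and all function names here are assumptions for illustration only.

```python
# Minimal sketch of PCA-based pose alignment for a point cloud (an
# assumption-laden illustration, not the paper's alignment module).
import numpy as np

def pca_align(points: np.ndarray) -> np.ndarray:
    """Map an (N, 3) point cloud to a canonical pose via PCA.

    If Y = X @ R.T for a rotation R, then Cov(Y) = R Cov(X) R.T, so the
    eigenvectors rotate with the data and the projection cancels R
    (up to per-axis sign flips, resolved below by a skewness convention;
    a full method would also handle symmetric shapes and handedness).
    """
    centered = points - points.mean(axis=0)        # remove translation
    cov = centered.T @ centered / len(points)      # 3x3 covariance
    _, eigvecs = np.linalg.eigh(cov)               # ascending eigenvalues
    basis = eigvecs[:, ::-1]                       # principal axis first
    proj = centered @ basis                        # rotation cancelled here
    # Assumed sign convention: flip each axis so its third moment is positive
    # (exactly symmetric data would make this unstable).
    signs = np.sign(np.sum(proj ** 3, axis=0))
    signs[signs == 0] = 1.0
    return proj * signs

# Any rotation of the same shape maps to (numerically) the same pose.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(1024, 3)) * np.array([3.0, 2.0, 1.0])
theta = np.pi / 5
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
print(np.allclose(pca_align(cloud), pca_align(cloud @ R.T)))  # True
```

The same cancellation is what lets the decoder reconstruct aligned 3D point clouds and 2D views as targets: the reconstruction loss is computed in a pose that does not depend on the input rotation.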
Xianying LIU, Qiuxia WU, Wenxiong KANG, Yuqiong LI. Rotation-invariant 2D views-3D point clouds auto-encoder[J]. Optics and Precision Engineering, 2023, 31(5): 656
Category: Three-dimensional topographic mapping
Received: Jul. 27, 2022
Accepted: --
Published Online: Apr. 4, 2023
Author Email: Qiuxia WU (qxwu@scut.edu.cn)