Optics and Precision Engineering, Volume 30, Issue 10, 1189 (2022)
Single-view 3D object reconstruction based on NFFD and graph convolution
[1] JIN Y W, JIANG D Q, CAI M. 3D reconstruction using deep learning: a survey[J]. Communications in Information and Systems, 20, 389-413(2020).
[2] FAHIM G, AMIN K, ZARIF S. Single-view 3D reconstruction: a survey of deep learning methods[J]. Computers & Graphics, 94, 164-190(2021).
[3] HENDERSON P, FERRARI V. Learning single-image 3D reconstruction by generative modelling of shape, pose and shading[J]. International Journal of Computer Vision, 128, 835-854(2020).
[4] CHOY C B, XU D F, GWAK J et al. 3D-R2N2: a unified approach for single and multi-view 3D object reconstruction[C](2016).
[5] LI L, XU H, WU S P. Fuzzy probability points reasoning for 3D reconstruction via deep deterministic policy gradient[J]. Acta Automatica Sinica, 2022, 48(4): 1105-1118. (in Chinese)
[6] XIA Q, LI S, HAO A M, et al. Deep learning for digital geometry processing and analysis: a review[J]. Journal of Computer Research and Development, 2019, 56(1): 155-182. (in Chinese)
[7] CHENG Q Q, SUN P Y, YANG C S et al. A morphing-based 3D point cloud reconstruction framework for medical image processing[J]. Computer Methods and Programs in Biomedicine, 193, 105495(2020).
[8] JIN P, LIU S L, LIU J H et al. Weakly-supervised single-view dense 3D point cloud reconstruction via differentiable renderer[J]. Chinese Journal of Mechanical Engineering, 34, 93(2021).
[9] FAN H Q, SU H, GUIBAS L. A point set generation network for 3D object reconstruction from a single image[C], 2463-2471(2017).
[10] ZHANG S F, LIU J, LIU Y H et al. DIMNet: dense implicit function network for 3D human body reconstruction[J]. Computers & Graphics, 98, 1-10(2021).
[11] YANG B, ROSA S, MARKHAM A et al. Dense 3D object reconstruction from a single depth view[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41, 2820-2834(2019).
[12] WU Z H, PAN S R, CHEN F W et al. A comprehensive survey on graph neural networks[J]. IEEE Transactions on Neural Networks and Learning Systems, 32, 4-24(2021).
[13] VALSESIA D, FRACASTORO G, MAGLI E. Learning localized representations of point clouds with graph-convolutional generative adversarial networks[J]. IEEE Transactions on Multimedia, 23, 402-414(2021).
[14] WANG N Y, ZHANG Y D, LI Z W et al. Pixel2Mesh: generating 3D mesh models from single RGB images[C], 52-67(2018).
[15] NGUYEN D, CHOI S, KIM W et al. GraphX-convolution for point cloud deformation in 2D-to-3D conversion[C], 8627-8636(2019).
[16] KURENKOV A, JI J W, GARG A et al. DeformNet: free-form deformation network for 3D shape reconstruction from a single image[C], 858-866(2018).
[17] PONTES J K, KONG C, SRIDHARAN S et al. Image2Mesh: a learning framework for single image 3D reconstruction[C](2018).
[18] LAMOUSIN H J, WAGGENSPACK N N. NURBS-based free-form deformations[J]. IEEE Computer Graphics and Applications, 14, 59-65(1994).
[19] TAO J, SUN G, SI J Z et al. A robust design for a winglet based on NURBS-FFD method and PSO algorithm[J]. Aerospace Science and Technology, 70, 568-577(2017).
[20] ORAZI L, REGGIANI B. Point inversion for triparametric NURBS[J]. International Journal on Interactive Design and Manufacturing (IJIDeM), 15, 55-61(2021).
[21] MENG Y B, JIN D, LIU G H, et al. Text detection with kernel-sharing dilated convolutions and attention-guided FPN[J]. Opt. Precision Eng., 2021, 29(8): 1955-1967. (in Chinese). doi: 10.37188/OPE.20212908.1955
[22] LI J Y, YANG J, KONG B, et al. Multi-scale vehicle and pedestrian detection algorithm based on attention mechanism[J]. Opt. Precision Eng., 2021, 29(6): 1448-1458. (in Chinese). doi: 10.37188/OPE.20212906.1448
[23] CAI T J, PENG X Y, SHI Y P, et al. Channel attention and residual concatenation network for image super-resolution[J]. Opt. Precision Eng., 2021, 29(1): 142-151. (in Chinese). doi: 10.37188/OPE.20212901.0142
[24] QIN C B, SONG Z Y, ZENG J Y, et al. Deeply supervised breast cancer segmentation combined with multi-scale and attention-residuals[J]. Opt. Precision Eng., 2021, 29(4): 877-895. (in Chinese). doi: 10.37188/OPE.20212904.0877
[25] MA J Y, ZHANG H, YI P et al. SCSCN: a separated channel-spatial convolution net with attention for single-view reconstruction[J]. IEEE Transactions on Industrial Electronics, 67, 8649-8658(2020).
[27] CHANG A X, FUNKHOUSER T A, GUIBAS L J et al. ShapeNet: an information-rich 3D model repository[J]. CoRR(2015).
[28] SUN X Y, WU J J, ZHANG X M et al. Pix3D: dataset and methods for single-image 3D shape modeling[C], 2974-2983(2018).
[29] MANDIKAL P, MURTHY N, AGARWAL M et al. 3D-LMNet: latent embedding matching for accurate and diverse 3D point cloud reconstruction from a single image[C], 55-56(2018).
[30] MESCHEDER L, OECHSLE M, NIEMEYER M et al. Occupancy networks: learning 3D reconstruction in function space[C], 4455-4465(2019).
[31] XU Q G, WANG W Y, CEYLAN D et al. DISN: deep implicit surface network for high-quality single-view 3D reconstruction[J]. Advances in Neural Information Processing Systems, 32, 492-502(2019).
[32] AFIFI A J, MAGNUSSON J, SOOMRO T A et al. Pixel2point: 3D object reconstruction from a single image using CNN and initial sphere[J]. IEEE Access, 9, 110-121(2020).
[33] JACK D, PONTES J K, SRIDHARAN S et al. Learning free-form deformations for 3D object reconstruction[C], 317-333(2019).
Yuanfeng LIAN, Shoushuang PEI, Wei HU. Single-view 3D object reconstruction based on NFFD and graph convolution[J]. Optics and Precision Engineering, 2022, 30(10): 1189
Category: Information Sciences
Received: Nov. 10, 2021
Accepted: --
Published Online: Jun. 1, 2022
The Author Email: Yuanfeng LIAN (lianyuanfeng@cup.edu.cn)