Laser & Optoelectronics Progress, Volume 59, Issue 14, 1415020 (2022)

3D Reconstruction and Accuracy Evaluation of Ancient Chinese Architectural Patches Based on Depth Learning from Single Image

Lihua Hu1,2, Wenzhuang Yin1,2, Siyuan Xing2, Jifu Zhang1, Qiulei Dong2, and Zhanyi Hu2,*
Author Affiliations
  • 1College of Computer Science and Technology, Taiyuan University of Science and Technology, Taiyuan 030024, Shanxi, China
  • 2National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
    Figures & Tables (15)
    Schematic of learning depth algorithm using stereo image pairs
    Schematic of learning depth algorithm using online motion estimation
    Schematic of direct depth comparison algorithm
    Flowchart of 3D point cloud comparison
    Depth maps estimated by stereo image pair method
    Some typical architectural parts for testing
    Mean value and standard deviation of the absolute depth error for the 15 typical scenes under the stereo image pair model and the motion estimation model
    Depth maps and their error maps of the two models with large error
    Depth maps and their error maps of the two models with small error
    Mean value and standard deviation of the distance errors from the predicted point cloud to the ground truth after ICP registration optimization
    Error distribution of point cloud (better result). (a) Stereo image pair model; (b) motion estimation model
    Error distribution of point cloud (poor result). (a) Stereo image pair model; (b) motion estimation model
    • Table 1. The 7 evaluation indexes used (a computational sketch of these indexes is given after Table 3 below)

      Evaluating index | Definition
      AbsE | $\frac{1}{\left|D\right|}\sum_{d\in D}\left|d'-d\right|$
      AbsRel | $\frac{1}{\left|D\right|}\sum_{d\in D}\frac{\left|d'-d\right|}{d'}$
      RMSE | $\sqrt{\frac{1}{\left|D\right|}\sum_{d\in D}\left(d'-d\right)^{2}}$
      RMSElog | $\sqrt{\frac{1}{\left|D\right|}\sum_{d\in D}\left(\log d'-\log d\right)^{2}}$
      $\delta_{t}$ | $\frac{1}{\left|D\right|}\left|\left\{d\in D:\max\left(\frac{d'}{d},\frac{d}{d'}\right)<1.25^{t}\right\}\right|\times 100\%,\ t=1,2,3$
    • Table 2. Comparison results of the stereo image pair model and the motion estimation model under the 7 indicators (each cell: S-DN / P-DN)

      Scene | Parameter | AbsE | AbsRel | RMSElog | RMSE | δ1 | δ2 | δ3
      S1 | Mean | 92.059 / 69.396 | 0.044 / 0.033 | 0.019 / 0.015 | 130.047 / 101.968 | 0.992 / 0.996 | 0.998 / 0.999 | 0.999 / 0.999
      S1 | Std | 20.119 / 13.009 | 0.011 / 0.008 | 0.004 / 0.003 | 30.701 / 26.728 | 0.012 / 0.007 | 0.005 / 0.003 | 0.002 / 0.002
      S2 | Mean | 142.004 / 85.449 | 0.069 / 0.042 | 0.031 / 0.018 | 218.779 / 134.206 | 0.939 / 0.986 | 0.986 / 0.997 | 0.998 / 1.000
      S2 | Std | 85.910 / 50.506 | 0.033 / 0.022 | 0.017 / 0.010 | 104.946 / 56.033 | 0.086 / 0.017 | 0.021 / 0.004 | 0.005 / 0.001
      S3 | Mean | 76.176 / 47.532 | 0.143 / 0.062 | 0.044 / 0.026 | 172.549 / 116.678 | 0.890 / 0.970 | 0.967 / 0.987 | 0.986 / 0.991
      S3 | Std | 47.523 / 33.026 | 0.104 / 0.036 | 0.027 / 0.015 | 134.621 / 79.858 | 0.107 / 0.075 | 0.036 / 0.02 | 0.018 / 0.012
      S4 | Mean | 81.869 / 53.993 | 0.053 / 0.037 | 0.023 / 0.016 | 118.610 / 84.116 | 0.985 / 0.991 | 0.995 / 0.999 | 0.998 / 0.999
      S4 | Std | 27.412 / 18.137 | 0.018 / 0.013 | 0.008 / 0.005 | 46.794 / 31.098 | 0.022 / 0.012 | 0.011 / 0.003 | 0.005 / 0.001
      S5 | Mean | 79.434 / 53.465 | 0.048 / 0.035 | 0.021 / 0.015 | 127.196 / 87.650 | 0.986 / 0.990 | 0.995 / 0.998 | 0.999 / 0.999
      S5 | Std | 13.332 / 15.185 | 0.007 / 0.011 | 0.003 / 0.005 | 28.824 / 31.470 | 0.012 / 0.009 | 0.005 / 0.003 | 0.002 / 0.001
      S6 | Mean | 73.650 / 51.223 | 0.060 / 0.044 | 0.024 / 0.018 | 146.474 / 121.922 | 0.977 / 0.981 | 0.986 / 0.988 | 0.993 / 0.995
      S6 | Std | 32.010 / 24.641 | 0.034 / 0.026 | 0.009 / 0.008 | 102.214 / 92.644 | 0.021 / 0.018 | 0.016 / 0.013 | 0.012 / 0.009
      S7 | Mean | 147.347 / 77.696 | 0.119 / 0.061 | 0.040 / 0.023 | 262.657 / 153.315 | 0.941 / 0.969 | 0.960 / 0.984 | 0.968 / 0.990
      S7 | Std | 77.375 / 39.067 | 0.099 / 0.046 | 0.024 / 0.014 | 199.390 / 110.325 | 0.073 / 0.039 | 0.062 / 0.025 | 0.052 / 0.017
      S8 | Mean | 61.501 / 45.311 | 0.059 / 0.039 | 0.024 / 0.017 | 112.967 / 91.378 | 0.965 / 0.987 | 0.991 / 0.994 | 0.995 / 0.997
      S8 | Std | 29.071 / 15.768 | 0.036 / 0.016 | 0.013 / 0.007 | 52.998 / 42.383 | 0.048 / 0.019 | 0.015 / 0.012 | 0.009 / 0.008
      S9 | Mean | 61.198 / 57.930 | 0.036 / 0.034 | 0.016 / 0.015 | 89.288 / 82.843 | 0.991 / 0.995 | 0.999 / 0.999 | 1.000 / 1.000
      S9 | Std | 17.013 / 13.090 | 0.011 / 0.007 | 0.004 / 0.003 | 25.556 / 19.684 | 0.013 / 0.006 | 0.003 / 0.002 | 0.001 / 0.000
      S10 | Mean | 98.554 / 79.701 | 0.057 / 0.046 | 0.024 / 0.020 | 148.852 / 118.570 | 0.979 / 0.988 | 0.995 / 0.999 | 0.999 / 1.000
      S10 | Std | 28.063 / 27.118 | 0.019 / 0.016 | 0.008 / 0.007 | 39.469 / 35.626 | 0.024 / 0.014 | 0.006 / 0.002 | 0.002 / 0.001
      S11 | Mean | 112.137 / 94.995 | 0.053 / 0.045 | 0.022 / 0.020 | 171.664 / 149.130 | 0.984 / 0.986 | 0.997 / 0.998 | 0.999 / 1.000
      S11 | Std | 16.456 / 21.645 | 0.008 / 0.009 | 0.003 / 0.004 | 30.734 / 31.393 | 0.011 / 0.010 | 0.006 / 0.003 | 0.002 / 0.001
      S12 | Mean | 60.529 / 46.035 | 0.065 / 0.044 | 0.024 / 0.018 | 111.509 / 89.939 | 0.967 / 0.981 | 0.988 / 0.993 | 0.992 / 0.995
      S12 | Std | 67.468 / 39.070 | 0.155 / 0.057 | 0.030 / 0.016 | 102.026 / 68.838 | 0.069 / 0.050 | 0.048 / 0.031 | 0.040 / 0.026
      S13 | Mean | 67.791 / 51.239 | 0.043 / 0.034 | 0.019 / 0.015 | 109.101 / 89.085 | 0.989 / 0.992 | 0.996 / 0.998 | 0.998 / 0.999
      S13 | Std | 21.200 / 14.242 | 0.013 / 0.010 | 0.005 / 0.004 | 37.649 / 34.578 | 0.011 / 0.009 | 0.007 / 0.004 | 0.006 / 0.003
      S14 | Mean | 138.875 / 62.554 | 0.056 / 0.026 | 0.025 / 0.011 | 161.117 / 83.849 | 0.996 / 0.999 | 0.999 / 1.000 | 1.000 / 1.000
      S14 | Std | 49.425 / 10.958 | 0.017 / 0.004 | 0.008 / 0.002 | 47.469 / 13.856 | 0.009 / 0.002 | 0.002 / 0.000 | 0.001 / 0.000
      S15 | Mean | 37.833 / 31.337 | 0.044 / 0.035 | 0.017 / 0.015 | 76.269 / 58.719 | 0.985 / 0.988 | 0.992 / 0.994 | 0.996 / 0.998
      S15 | Std | 20.797 / 16.653 | 0.033 / 0.022 | 0.009 / 0.008 | 64.297 / 38.356 | 0.026 / 0.024 | 0.021 / 0.015 | 0.013 / 0.007
    • Table 3. Mean value and standard deviation of the distance errors from the predicted point cloud to the ground truth under the 15 typical scenes (a sketch of this ICP-based point-cloud comparison is given below)

      Scene | Mean-ICP (P-DN) | Std-ICP (P-DN) | Mean-ICP (S-DN) | Std-ICP (S-DN)
      S1 | 52.8 | 22.1 | 60.8 | 15.7
      S2 | 110.2 | 30.1 | 112.2 | 20.0
      S3 | 40.0 | 13.8 | 52.8 | 39.9
      S4 | 49.2 | 15.9 | 47.4 | 15.4
      S5 | 50.9 | 26.1 | 51.9 | 33.1
      S6 | 60.1 | 37.9 | 59.4 | 37.0
      S7 | 49.6 | 17.1 | 52.4 | 14.5
      S8 | 33.4 | 21.0 | 41.92 | 23.5
      S9 | 40.4 | 9.8 | 53.7 | 13.1
      S10 | 150.8 | 14.5 | 154.2 | 21.0
      S11 | 134.0 | 22.9 | 128.8 | 26.1
      S12 | 54.3 | 21.4 | 54.9 | 22.5
      S13 | 47.0 | 14.3 | 44.67 | 12.0
      S14 | 54.9 | 12.3 | 55.7 | 12.3
      S15 | 27.0 | 16.7 | 29.3 | 16.0
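    The indexes of Table 1 can be computed directly from a predicted depth map and its ground truth. The sketch below is a minimal NumPy illustration, not the authors' code; it assumes d in Table 1 denotes the predicted depth and d' the ground-truth depth (as the AbsRel normalization suggests), and the function and argument names are hypothetical.

    ```python
    import numpy as np


    def depth_metrics(pred, gt):
        """Seven indexes of Table 1, evaluated on pixels with valid ground truth.

        `pred` holds the predicted depths d, `gt` the ground-truth depths d'
        (an assumption; the table itself only gives the formulas).
        """
        mask = gt > 0                                   # keep only valid ground-truth pixels
        d, d_gt = pred[mask], gt[mask]

        abs_e = np.mean(np.abs(d_gt - d))                                  # AbsE
        abs_rel = np.mean(np.abs(d_gt - d) / d_gt)                         # AbsRel
        rmse = np.sqrt(np.mean((d_gt - d) ** 2))                           # RMSE
        rmse_log = np.sqrt(np.mean((np.log(d_gt) - np.log(d)) ** 2))       # RMSElog

        ratio = np.maximum(d_gt / d, d / d_gt)                             # max(d'/d, d/d')
        deltas = [np.mean(ratio < 1.25 ** t) * 100.0 for t in (1, 2, 3)]   # δ1, δ2, δ3 in %

        return (abs_e, abs_rel, rmse, rmse_log, *deltas)
    ```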
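    Table 3 and the error-distribution figures report the distances from the predicted point cloud to the ground truth after ICP registration optimization. The paper does not name its tooling, so the following is only a sketch of such a comparison, assuming Open3D; the file paths and correspondence threshold are hypothetical placeholders.

    ```python
    import numpy as np
    import open3d as o3d  # assumed tooling; the paper does not specify its implementation


    def point_cloud_error(pred_path, gt_path, max_corr_dist=50.0):
        """Mean/std of distances from a predicted point cloud to ground truth,
        after point-to-point ICP registration (cf. Table 3)."""
        pred = o3d.io.read_point_cloud(pred_path)   # point cloud back-projected from the predicted depth
        gt = o3d.io.read_point_cloud(gt_path)       # ground-truth point cloud

        # Rigidly align the prediction to the ground truth with ICP.
        reg = o3d.pipelines.registration.registration_icp(
            pred, gt, max_corr_dist, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        pred.transform(reg.transformation)

        # Distance from every predicted point to its nearest ground-truth point.
        dists = np.asarray(pred.compute_point_cloud_distance(gt))
        return dists.mean(), dists.std()
    ```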
    Citation

    Lihua Hu, Wenzhuang Yin, Siyuan Xing, Jifu Zhang, Qiulei Dong, Zhanyi Hu. 3D Reconstruction and Accuracy Evaluation of Ancient Chinese Architectural Patches Based on Depth Learning from Single Image[J]. Laser & Optoelectronics Progress, 2022, 59(14): 1415020

    Paper Information

    Category: Machine Vision

    Received: Mar. 8, 2022

    Accepted: Mar. 29, 2022

    Published Online: Jul. 1, 2022

    Corresponding author: Zhanyi Hu (huzy@nlpr.ia.ac.cn)

    DOI: 10.3788/LOP202259.1415020
