Acta Optica Sinica, Volume 44, Issue 14, 1415001 (2024)

Three-Dimensional Point Cloud Registration Network Based on Deep Interactive Multi-Scale Receptive Field Feature Learning

Han Zhou1, Xuchu Wang1,2,*, and Yue Yuan1
Author Affiliations
  • 1College of Optoelectronic Engineering, Chongqing University, Chongqing 400040, China
  • 2Key Laboratory of Optoelectronic Technology & Systems (Chongqing University), Ministry of Education, Chongqing 400040, China
    Figures & Tables (13)
    Framework diagram of DIM-RFNet
    Deep feature extraction module
    Transformer module diagram
    ORE structure diagram
    Visualization results on ModelNet40 and ModelLoNet datasets obtained by proposed model
    Visualization results on 3DMatch and 3DLoMatch datasets obtained by proposed model
    Visualization results on Odometry KITTI dataset obtained by proposed model
    • Table 1. Datasets used for experiments and their partition

      Dataset | Training set | Validation set | Testing set
      ModelNet40 and ModelLoNet | 5000 samples | 1200 samples | 1220 samples
      3DMatch and 3DLoMatch | 45 scenes | 9 scenes | 8 scenes
      OdometryKITTI | Sequences 00-05 | Sequences 06-08 | Sequences 09-10
    • Table 2. Experimental parameters of the proposed method

      Dataset | L | N | nX | γ | Δpos | Δneg | rc | rn | ro | rm
      ModelNet40 | 5 | 3 | 384 | 64 | 0.1 | 1.4 | 0.018 | 0.060 | 0.040 | 0.040
      3DMatch | 5 | 3 | 256 | 24 | 0.1 | 1.4 | 0.036 | 0.130 | 0.037 | 0.040
      OdometryKITTI | 5 | 3 | 512 | 48 | 0.1 | 1.4 | 0.210 | 0.720 | 0.430 | 0.280
    • Table 3. Test results on ModelNet40 and ModelLoNet datasets obtained by compared methods

      Method | RRE of ModelNet40 | RTE of ModelNet40 | CD of ModelNet40 | RRE of ModelLoNet | RTE of ModelLoNet | CD of ModelLoNet
      ICP | 21.236 | 0.286 | 0.12130 | 28.043 | 0.791 | 0.1997
      Symmetric ICP | 17.576 | 0.135 | 0.10340 | 24.718 | 0.857 | 0.1838
      FGR (2016) | 14.726 | 0.103 | 0.08520 | 23.416 | 0.703 | 0.1741
      FMR (2020) | 13.674 | 0.085 | 0.09830 | 22.574 | 0.685 | 0.1583
      PointNetLK (2019) | 12.733 | 0.121 | 0.08410 | 21.733 | 0.632 | 0.1342
      DCP (2019) | 11.795 | 0.171 | 0.01170 | 16.501 | 0.251 | 0.0268
      OMNet (2021) | 2.947 | 0.043 | 0.00150 | 8.947 | 0.133 | 0.0095
      RPMNet (2021) | 1.712 | 0.018 | 0.00850 | 7.342 | 0.124 | 0.0050
      Predator (2021) | 1.739 | 0.018 | 0.00120 | 5.235 | 0.132 | 0.0083
      Regtr (2022) | 1.473 | 0.014 | 0.00078 | 3.930 | 0.087 | 0.0037
      DIM-RFNet | 1.542 | 0.014 | 0.00077 | 3.975 | 0.085 | 0.0037

      (A hedged sketch of how RRE, RTE, and CD are commonly computed appears after the table list below.)
    • Table 4. Experimental results on 3DMatch and 3DLoMatch datasets obtained by compared methods

      Dataset | 3DMatch | 3DLoMatch
      Number of samples | 5000 | 2500 | 1000 | 500 | 250 | 5000 | 2500 | 1000 | 500 | 250

      FMR /%
      PerfectMatch (2019) | 95.0 | 94.3 | 92.9 | 90.1 | 82.9 | 63.6 | 61.7 | 53.6 | 45.2 | 34.2
      FCGF (2019) | 97.4 | 97.3 | 97.0 | 96.7 | 96.6 | 76.6 | 75.4 | 74.2 | 71.7 | 67.3
      D3Feat (2020) | 95.6 | 95.4 | 94.5 | 94.1 | 93.1 | 67.3 | 66.7 | 67.0 | 66.7 | 66.5
      SpinNet (2021) | 97.6 | 97.2 | 96.8 | 95.5 | 94.3 | 75.3 | 74.9 | 72.5 | 70.0 | 63.6
      Predator (2021) | 96.6 | 96.6 | 96.5 | 96.3 | 96.5 | 78.6 | 77.4 | 76.3 | 75.7 | 75.3
      CofiNet (2021) | 98.1 | 98.3 | 98.1 | 98.2 | 98.3 | 73.1 | 73.5 | 75.5 | 75.3 | 69.9
      Regtr (2022) | 97.8 | 97.4 | 96.9 | 96.1 | 95.6 | 74.3 | 74.4 | 74.2 | 73.8 | 72.9
      GeoTransformer (2023) | 97.9 | 97.9 | 97.9 | 97.9 | 97.6 | 88.3 | 88.6 | 88.8 | 88.6 | 88.3
      RoReg (2023) | 98.2 | 97.9 | 98.2 | 97.8 | 97.2 | 82.1 | 82.1 | 81.7 | 81.6 | 80.2
      DIM-RFNet | 97.7 | 97.6 | 97.6 | 97.6 | 97.5 | 88.4 | 88.6 | 88.9 | 88.6 | 88.4

      IR /%
      PerfectMatch (2019) | 36.0 | 32.5 | 26.4 | 21.5 | 16.4 | 11.4 | 10.1 | 8.0 | 6.4 | 4.8
      FCGF (2019) | 56.8 | 54.1 | 48.7 | 42.5 | 34.1 | 21.4 | 20.0 | 17.2 | 14.8 | 11.6
      D3Feat (2020) | 39.0 | 38.8 | 40.4 | 41.5 | 41.8 | 13.2 | 13.1 | 14.0 | 14.6 | 15.0
      SpinNet (2021) | 47.5 | 44.7 | 39.4 | 33.9 | 27.6 | 20.5 | 19.0 | 16.3 | 13.8 | 11.1
      Predator (2021) | 58.0 | 58.4 | 57.1 | 54.1 | 49.3 | 26.7 | 28.1 | 28.3 | 27.5 | 25.8
      CofiNet (2021) | 49.8 | 51.2 | 51.9 | 52.2 | 52.2 | 24.4 | 25.9 | 26.7 | 26.8 | 26.9
      Regtr (2022) | 57.3 | 55.2 | 53.8 | 52.7 | 51.1 | 27.6 | 27.3 | 27.1 | 26.6 | 25.4
      GeoTransformer (2023) | 71.9 | 75.2 | 76.0 | 82.2 | 85.1 | 43.5 | 45.3 | 46.2 | 52.9 | 57.7
      RoReg (2023) | 81.6 | 80.2 | 75.1 | 74.1 | 75.2 | 39.6 | 39.6 | 34.0 | 31.9 | 34.5
      DIM-RFNet | 71.8 | 75.2 | 76.0 | 82.2 | 85.1 | 43.7 | 45.3 | 46.1 | 52.9 | 57.8

      RR /%
      PerfectMatch (2019) | 78.4 | 76.2 | 71.4 | 67.6 | 50.8 | 33.0 | 29.0 | 23.3 | 17.0 | 11.0
      FCGF (2019) | 85.1 | 84.7 | 83.3 | 81.6 | 71.4 | 40.1 | 41.7 | 38.2 | 35.4 | 26.8
      D3Feat (2020) | 81.6 | 84.5 | 83.4 | 82.4 | 77.9 | 37.2 | 42.7 | 46.9 | 43.8 | 39.1
      SpinNet (2021) | 88.6 | 86.6 | 85.5 | 83.5 | 70.2 | 59.8 | 54.9 | 48.3 | 39.8 | 26.8
      Predator (2021) | 89.0 | 89.9 | 90.6 | 88.5 | 86.6 | 59.8 | 61.2 | 62.4 | 60.8 | 58.1
      CofiNet (2021) | 89.3 | 88.9 | 88.4 | 87.4 | 87.0 | 67.5 | 66.2 | 64.2 | 63.1 | 61.0
      Regtr (2022) | 92.0 | 91.2 | 89.7 | 90.6 | 90.4 | 64.8 | 64.4 | 64.2 | 62.3 | 59.7
      GeoTransformer (2023) | 92.0 | 91.8 | 91.8 | 91.4 | 91.2 | 75.0 | 74.8 | 74.2 | 74.1 | 73.5
      RoReg (2023) | 92.9 | 93.2 | 92.7 | 93.3 | 91.2 | 70.3 | 71.2 | 69.5 | 67.9 | 64.3
      DIM-RFNet | 92.0 | 91.7 | 91.6 | 91.4 | 91.2 | 74.9 | 74.9 | 74.3 | 74.2 | 73.6

      (A hedged sketch of how FMR, IR, and RR are commonly computed appears after the table list below.)
    • Table 5. Experimental results on OdometryKITTI dataset

      Method | RTE /cm | RRE /(°) | RR /%
      3DFeat-Net (2019) | 25.9 | 0.57 | 96.0
      FCGF (2019) | 9.5 | 0.30 | 96.6
      D3Feat (2020) | 7.2 | 0.30 | 99.8
      Predator (2021) | 6.8 | 0.27 | 99.8
      GLORN (2022) | 6.2 | 0.27 | 99.8
      GeoTransformer (2023) | 7.4 | 0.27 | 99.8
      DIM-RFNet | 6.7 | 0.25 | 99.8
    • Table 6. Ablation experimental results on ModelLoNet and ModelNet40 datasets

      Modules (ENSF / DNSF / ORE and ORD) | RRE of ModelNet40 | RTE of ModelNet40 | CD of ModelNet40 | RRE of ModelLoNet | RTE of ModelLoNet | CD of ModelLoNet
      × × × | 12.745 | 0.312 | 0.1243 | 21.416 | 0.752 | 0.1351
      × × | 11.762 | 0.163 | 0.0112 | 16.523 | 0.253 | 0.0269
      × | 8.993 | 0.143 | 0.0099 | 13.475 | 0.201 | 0.0207
      × × | 1.738 | 0.019 | 0.0085 | 7.408 | 0.152 | 0.0106
      × | 1.723 | 0.015 | 0.0081 | 7.351 | 0.148 | 0.0102
      | 1.542 | 0.014 | 0.0008 | 4.975 | 0.115 | 0.0079
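
    The error metrics reported for ModelNet40/ModelLoNet (RRE, RTE, CD) and for OdometryKITTI (RTE, RRE, RR) are standard registration measures. The sketch below is a minimal NumPy illustration of the commonly used definitions, not the paper's evaluation code; the function names are illustrative, and the paper's exact formulation (per-pair averaging, isotropic vs. anisotropic errors, squared vs. root Chamfer distance) may differ.

```python
# Minimal sketch of standard registration error metrics (assumed definitions,
# not taken from the paper): RRE, RTE, and symmetric Chamfer distance (CD).
import numpy as np

def relative_rotation_error(R_est: np.ndarray, R_gt: np.ndarray) -> float:
    """RRE in degrees: geodesic distance between estimated and ground-truth rotations."""
    cos_theta = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

def relative_translation_error(t_est: np.ndarray, t_gt: np.ndarray) -> float:
    """RTE: Euclidean distance between estimated and ground-truth translation vectors."""
    return float(np.linalg.norm(t_est - t_gt))

def chamfer_distance(src: np.ndarray, tgt: np.ndarray) -> float:
    """Symmetric Chamfer distance between point sets src (N, 3) and tgt (M, 3),
    in the squared-distance form; some papers report root distances instead."""
    d2 = np.sum((src[:, None, :] - tgt[None, :, :]) ** 2, axis=-1)  # (N, M) pairwise squared distances
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())
```

    A table entry such as an RRE column would then typically be the mean (or median) of relative_rotation_error over all test pairs; whether this paper averages with the mean or the median is not stated in the material above.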
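
    The correspondence-level metrics on 3DMatch/3DLoMatch (IR: inlier ratio, FMR: feature matching recall, RR: registration recall) are likewise standard quantities. The sketch below shows the commonly used definitions; the thresholds (0.1 m inlier distance, 5% inlier-ratio cutoff for FMR, 0.2 m RMSE for RR) are conventional 3DMatch values and are assumptions here, not values confirmed by this paper.

```python
# Minimal sketch of correspondence-based metrics (assumed 3DMatch-style
# definitions and thresholds, not taken from the paper): IR, FMR, and RR.
import numpy as np

def inlier_ratio(src_corr, tgt_corr, R_gt, t_gt, dist_thresh=0.1):
    """IR: fraction of putative correspondences closer than dist_thresh (m)
    after aligning the source points with the ground-truth transform."""
    src_aligned = src_corr @ R_gt.T + t_gt          # apply GT rotation and translation
    dists = np.linalg.norm(src_aligned - tgt_corr, axis=1)
    return float((dists < dist_thresh).mean())

def feature_matching_recall(inlier_ratios, ir_thresh=0.05):
    """FMR: fraction of scan pairs whose inlier ratio exceeds ir_thresh."""
    return float((np.asarray(inlier_ratios) > ir_thresh).mean())

def registration_recall(rmses, rmse_thresh=0.2):
    """RR: fraction of scan pairs whose ground-truth-correspondence RMSE under
    the estimated transform is below rmse_thresh (m)."""
    return float((np.asarray(rmses) < rmse_thresh).mean())
```

    The "Number of samples" row in Table 4 is read here as the number of interest points sampled per scan when extracting correspondences, which is the usual protocol for this benchmark; that reading is an assumption, not a statement from the paper.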
    Citation

    Han Zhou, Xuchu Wang, Yue Yuan. Three-Dimensional Point Cloud Registration Network Based on Deep Interactive Multi-Scale Receptive Field Feature Learning[J]. Acta Optica Sinica, 2024, 44(14): 1415001

    Paper Information

    Category: Machine Vision

    Received: Jan. 19, 2024

    Accepted: Apr. 15, 2024

    Published Online: Jul. 4, 2024

    Corresponding Author Email: Xuchu Wang (xcwang@cqu.edu.cn)

    DOI: 10.3788/AOS240529

    CSTR: 32393.14.AOS240529
