Optics and Precision Engineering, Volume 31, Issue 16, 2444 (2023)

Review of multi-view stereo reconstruction methods based on deep learning

Huabiao YAN1, Fangqi XU1, Lü'er HUANG2,*, Cibo LIU1, and Chuxin LIN1

Author Affiliations
  • 1School of Science, Jiangxi University of Science and Technology, Ganzhou 341000, China
  • 2School of Electrical Engineering and Automation, Jiangxi University of Science and Technology, Ganzhou 341000, China
Figures & Tables (7)
  • Figure: Overall structure of MVSNet (a code sketch of its core step follows this list)
  • Figure: Classification of MVS network improvement methods based on supervised learning
  • Figure: Visualized performance comparison of MVS methods on the Tanks and Temples benchmark [106] (higher is better)
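The MVSNet structure shown in the first figure follows a now-standard pipeline: 2D CNN feature extraction, differentiable homography warping of source-view features onto depth hypothesis planes of the reference camera, aggregation of the warped features into a variance-based cost volume, 3D CNN regularization, and depth regression. Below is a minimal PyTorch-style sketch of the warping and cost volume step only; the function name, tensor shapes, and projection-matrix convention are illustrative assumptions, not the authors' code.

```python
# Minimal sketch: MVSNet-style differentiable warping + variance cost volume.
# Assumptions (illustrative, not from the paper's code): proj_mats holds the
# [B,3,4] relative source-from-reference projection for each source view, and
# depth_values holds [B,D] fronto-parallel depth hypotheses.
import torch
import torch.nn.functional as F

def build_variance_cost_volume(ref_feat, src_feats, proj_mats, depth_values):
    B, C, H, W = ref_feat.shape
    D = depth_values.shape[1]
    # Reference feature repeated over all D depth hypotheses.
    volumes = [ref_feat.unsqueeze(2).expand(B, C, D, H, W)]
    # Homogeneous pixel grid of the reference view, shape [3, H*W].
    ys = torch.arange(H, device=ref_feat.device, dtype=ref_feat.dtype)
    xs = torch.arange(W, device=ref_feat.device, dtype=ref_feat.dtype)
    y, x = torch.meshgrid(ys, xs, indexing="ij")
    xyz = torch.stack([x.reshape(-1), y.reshape(-1), torch.ones_like(x).reshape(-1)])
    for feat, P in zip(src_feats, proj_mats):
        R, t = P[:, :, :3], P[:, :, 3:]               # rotation / translation parts
        pts = R @ xyz                                  # [B,3,H*W]
        # Scale rays by every depth hypothesis, then translate: [B,3,D*H*W].
        pts = (pts.unsqueeze(2) * depth_values.view(B, 1, D, 1)).reshape(B, 3, -1) + t
        uv = pts[:, :2] / pts[:, 2:].clamp(min=1e-6)   # perspective divide
        gx = 2 * uv[:, 0] / (W - 1) - 1                # normalize to [-1, 1]
        gy = 2 * uv[:, 1] / (H - 1) - 1
        grid = torch.stack([gx, gy], dim=-1).view(B, D * H, W, 2)
        warped = F.grid_sample(feat, grid, align_corners=True)
        volumes.append(warped.view(B, C, D, H, W))
    # Variance over views: low variance = photo-consistent depth hypothesis.
    return torch.stack(volumes).var(dim=0, unbiased=False)   # [B,C,D,H,W]
```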
Table 1. Main characteristics and problems of MVS methods with different cost volume regularization strategies

Type | Variant | Characteristic | Problem | Methods
End-to-end | CNN | Cost volume regularization using 3D CNN | Usually slow in training and inference, with large memory consumption | [18], [70], [57], [55], [71]
End-to-end | RNN | Combines the accuracy of 3D CNN with the efficiency of RNN, greatly reducing memory consumption | Lower memory consumption but longer runtime | [16], [47], [72], [73], [41], [74]
End-to-end | Coarse-to-fine | Refines the initial depth map by other methods once it is obtained | Requires some prior knowledge | [11], [17], [66], [49], [22], [48], [34], [50], [30], [75], [45], [53], [64], [63], [43], [76]
Multi-stage | Coarse-to-fine | Builds the cost volume over the entire depth range at coarse resolution, then computes a narrowed sampling range from the coarse depth map | Prediction accuracy depends heavily on the initial depth map; the cost volume characteristics of the different stages are not fully exploited | [60], [61], [27], [20], [62], [19], [24], [54], [51], [31], [67], [42], [68], [77], [78], [23], [79], [39], [28], [32], [65]
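The memory problem noted for the end-to-end CNN row comes from running 3D convolutions over the full D×H×W volume. Below is a minimal sketch, assuming PyTorch, of this regularization pattern followed by soft-argmin depth regression; the layer widths and single skip connection are illustrative (and the down/up round trip assumes even D, H, W), not a reproduction of any cited network.

```python
# Minimal sketch of end-to-end 3D CNN cost volume regularization + soft argmin.
# Layer sizes are illustrative assumptions; real networks use deeper 3D U-Nets.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CostRegNet(nn.Module):
    """Turn a [B,C,D,H,W] cost volume into a per-pixel depth distribution [B,D,H,W]."""
    def __init__(self, in_ch=32):
        super().__init__()
        self.conv0 = nn.Sequential(nn.Conv3d(in_ch, 8, 3, padding=1), nn.ReLU(True))
        self.down = nn.Sequential(nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(True))
        self.up = nn.ConvTranspose3d(16, 8, 3, stride=2, padding=1, output_padding=1)
        self.prob = nn.Conv3d(8, 1, 3, padding=1)

    def forward(self, cost_volume):                # assumes even D, H, W
        x0 = self.conv0(cost_volume)
        x = self.up(self.down(x0)) + x0            # one U-Net style skip connection
        return F.softmax(self.prob(x).squeeze(1), dim=1)   # softmax over depth axis

def soft_argmin(prob, depth_values):
    """Expected depth: [B,D,H,W] probabilities x [B,D] hypotheses -> [B,H,W]."""
    return torch.sum(prob * depth_values.view(*depth_values.shape, 1, 1), dim=1)
```

The RNN row of the table replaces these 3D convolutions with a recurrent sweep along the depth axis, which is why it trades the 3D CNN's memory footprint for longer runtime.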
Table 2. Summary of loss functions of MVS methods based on unsupervised learning (the common building blocks are L_SSIM, L_smooth, L_PC, L_1, L_feature, view-synthesis, image-gradient, and L_DA losses; the last column lists each method's distinctive additional terms)

Type | Network | Training method | Additional loss terms
Unsupervised | UnsupMVS [89] | End-to-end | —
Unsupervised | MVS2 [90] | End-to-end | —
Unsupervised | M3VSNet [93] | End-to-end | Normal-depth consistency
Unsupervised | PatchMVSNet [98] | End-to-end | L_geometric
Unsupervised | RC-MVSNet [99] | End-to-end | L_render
Unsupervised | MS-MVS [97] | End-to-end | —
Self-supervised | JDACS [94] | End-to-end | L_SC
Self-supervised | Meta MVS [91] | Multi-stage | —
Self-supervised | SelfsupCVP [95] | Multi-stage | L_perception
Self-supervised | U-MVSNet [96] | Multi-stage | Optical flow
Self-supervised | KD-MVS [105] | Multi-stage | —
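Most rows of Table 2 share the same photometric core: warp source images into the reference view with the predicted depth, then penalize appearance differences with SSIM and L1 terms plus an edge-aware depth smoothness term. A minimal sketch of that common core, assuming PyTorch; the weights alpha and w_smooth and all function names are illustrative, not taken from any cited method.

```python
# Minimal sketch of the shared unsupervised loss core (L_SSIM + L1 + L_smooth).
# Weights and names are illustrative assumptions.
import torch
import torch.nn.functional as F

def ssim_map(x, y, C1=0.01 ** 2, C2=0.03 ** 2):
    """Single-scale SSIM dissimilarity over 3x3 windows; inputs [B,3,H,W] in [0,1]."""
    mu_x, mu_y = F.avg_pool2d(x, 3, 1, 1), F.avg_pool2d(y, 3, 1, 1)
    var_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    cov = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    s = ((2 * mu_x * mu_y + C1) * (2 * cov + C2)) / (
        (mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))
    return ((1 - s) / 2).clamp(0, 1)               # 0 = identical, 1 = dissimilar

def smoothness_loss(depth, image):
    """Edge-aware smoothness: depth gradients are cheap where image gradients are large."""
    dx_d = (depth[..., :, 1:] - depth[..., :, :-1]).abs()
    dy_d = (depth[..., 1:, :] - depth[..., :-1, :]).abs()
    dx_i = (image[..., :, 1:] - image[..., :, :-1]).abs().mean(1, keepdim=True)
    dy_i = (image[..., 1:, :] - image[..., :-1, :]).abs().mean(1, keepdim=True)
    return (dx_d * torch.exp(-dx_i)).mean() + (dy_d * torch.exp(-dy_i)).mean()

def unsup_loss(ref_img, warped_src, valid_mask, depth, alpha=0.85, w_smooth=0.1):
    """L = alpha*L_SSIM + (1-alpha)*L1 + w_smooth*L_smooth over valid (visible) pixels."""
    n = valid_mask.sum().clamp(min=1)
    l1 = ((ref_img - warped_src).abs().mean(1, keepdim=True) * valid_mask).sum() / n
    ls = (ssim_map(ref_img, warped_src).mean(1, keepdim=True) * valid_mask).sum() / n
    return alpha * ls + (1 - alpha) * l1 + w_smooth * smoothness_loss(depth, ref_img)
```

The "Additional loss terms" column then stacks method-specific terms on top of this core, e.g. the normal-depth consistency of M3VSNet [93] or the rendering loss of RC-MVSNet [99].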
Table 3. Datasets used by MVS methods

Dataset | Subset | Scene | Resolution | Images | Scenes | Online benchmark | Website
DTU [9] | — | Indoor | 1 600×1 200 pixels | 27 097 | 124 | No | http://roboimagedata.compute.dtu.dk/
Tanks and Temples [106] | Intermediate | Outdoor | 8-megapixel | ≈5 600 | 8 | Yes | https://www.tanksandtemples.org/
Tanks and Temples [106] | Advanced | Outdoor | 8-megapixel | | 6 | Yes |
ETH3D [107] | High resolution | Indoor and outdoor | 24-megapixel | 454 | 13 (train) | Yes | https://www.eth3d.net/
ETH3D [107] | High resolution | Indoor and outdoor | 24-megapixel | 443 | 12 (test) | Yes |
ETH3D [107] | Low resolution | Indoor and outdoor | 0.4-megapixel | 4 796 | 5 (train) | Yes |
ETH3D [107] | Low resolution | Indoor and outdoor | 0.4-megapixel | 5 212 | 5 (test) | Yes |
BlendedMVS [108] | — | Outdoor | 1 536×2 048 pixels | 17 818 | 106 (train), 7 (test) | No | https://github.com/YoYo000/BlendedMVS
GigaMVS [109] | — | Outdoor | Gigapixel | 3 599 | 13 | Yes | http://www.gigamvs.com/
Table 4. Quantitative results of different MVS methods on the DTU dataset [9] (lower is better)

Supervised methods:
Method | Acc./mm | Comp./mm | Overall/mm | Method | Acc./mm | Comp./mm | Overall/mm
MVSNet [11] | 0.396 | 0.527 | 0.462 | AACVP-MVSNet [31] | 0.357 | 0.326 | 0.341
R-MVSNet [16] | 0.383 | 0.452 | 0.417 | NR2-Net [73] | 0.370 | 0.332 | 0.351
P-MVSNet [66] | 0.406 | 0.434 | 0.420 | AA-RMVSNet [41] | 0.376 | 0.339 | 0.357
Point-MVSNet [17] | 0.342 | 0.411 | 0.376 | EPP-MVSNet [67] | 0.413 | 0.296 | 0.355
MVSCRF [18] | 0.371 | 0.426 | 0.398 | PatchmatchNet [62] | 0.427 | 0.277 | 0.352
CVP-MVSNet [20] | 0.296 | 0.406 | 0.351 | DRI-MVSNet [28] | 0.432 | 0.327 | 0.379
CasMVSNet [19] | 0.325 | 0.385 | 0.355 | IterMVS [74] | 0.373 | 0.354 | 0.363
UCSNet [27] | 0.338 | 0.349 | 0.344 | SPGNet [71] | 0.320 | 0.382 | 0.351
BP-MVSNet [70] | 0.333 | 0.320 | 0.327 | D-CasMVSNet [42] | 0.348 | 0.350 | 0.349
Fast-MVSNet [48] | 0.336 | 0.403 | 0.370 | ASPPMVSNet [39] | 0.334 | 0.360 | 0.347
PVSNet [61] | 0.337 | 0.315 | 0.326 | Effi-MVS [78] | 0.314 | 0.334 | 0.324
Vis-MVSNet [22] | 0.369 | 0.361 | 0.365 | GBi-Net [75] | 0.315 | 0.262 | 0.289
VA-Point-MVSNet [49] | 0.359 | 0.358 | 0.359 | TransMVSNet [45] | 0.321 | 0.289 | 0.305
PVA-MVSNet [24] | 0.379 | 0.336 | 0.357 | RayMVSNet [23] | 0.341 | 0.319 | 0.330
Att-MVS [83] | 0.383 | 0.329 | 0.356 | MVSTER [53] | 0.350 | 0.276 | 0.313
D2HC-RMVSNet [47] | 0.395 | 0.378 | 0.386 | UniMVSNet [68] | 0.352 | 0.278 | 0.315
MVSNet++ [57] | 0.407 | 0.345 | 0.376 | NP-CVP-MVSNet [77] | 0.356 | 0.275 | 0.315
REDNet [72] | 0.456 | 0.326 | 0.391 | ACINR-MVSNet [63] | 0.306 | 0.364 | 0.335
CIDER [60] | 0.417 | 0.437 | 0.427 | ADR-MVSNet [79] | 0.354 | 0.317 | 0.335
LANet [34] | 0.320 | 0.349 | 0.335 | MSCVP-MVSNet [65] | 0.379 | 0.278 | 0.328
DDR-Net [51] | 0.339 | 0.320 | 0.329 | FPSA-MVSNet [32] | 0.363 | 0.283 | 0.323
PA-MVSNet [30] | 0.313 | 0.437 | 0.375 | ADIM-MVSNet [43] | 0.344 | 0.298 | 0.321
DDL-MVS [88] | 0.405 | 0.267 | 0.336 | HSF-MVSNet [35] | 0.378 | 0.353 | 0.365
BH-RMVSNet [84] | 0.368 | 0.303 | 0.335 | MVSFormer [37] | 0.327 | 0.251 | 0.289
CER-MVS [110] | 0.359 | 0.305 | 0.332 | CDSFNet [46] | 0.352 | 0.280 | 0.316
HighRes-MVSNet [50] | 0.354 | 0.393 | 0.373 | MFNet [64] | 0.339 | 0.304 | 0.321
MVSTR [54] | 0.356 | 0.295 | 0.326 | WT-MVSNet [55] | 0.309 | 0.281 | 0.295

Unsupervised methods:
Method | Acc./mm | Comp./mm | Overall/mm | Method | Acc./mm | Comp./mm | Overall/mm
UnsupMVS [89] | 0.881 | 1.073 | 0.977 | SurRF [103] | 0.388 | 0.390 | 0.389
Meta MVS [91] | 0.594 | 0.779 | 0.687 | CLD-MVS [111] | 0.335 | 0.430 | 0.383
MVS2 [90] | 0.760 | 0.515 | 0.637 | Self-sup-CVP-MVSNet [95] | 0.308 | 0.418 | 0.363
M3VSNet [93] | 0.636 | 0.531 | 0.583 | JDACS-MS [94] | 0.398 | 0.318 | 0.358
JDACS [94] | 0.571 | 0.515 | 0.543 | U-MVSNet [96] | 0.354 | 0.354 | 0.354
PatchMVSNet [98] | 0.538 | 0.365 | 0.451 | RC-MVSNet [99] | 0.396 | 0.295 | 0.345
MS-MVS [97] | 0.383 | 0.415 | 0.399 | KD-MVS [105] | 0.359 | 0.295 | 0.327
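For reference, Acc., Comp., and Overall in Table 4 are the standard DTU point-cloud metrics: the mean distance from reconstructed points to the ground-truth cloud (accuracy), the reverse direction (completeness), and the average of the two. A minimal sketch, assuming PyTorch; the official evaluation additionally applies observability masks and outlier-distance thresholds, which are omitted here.

```python
# Minimal sketch of DTU-style accuracy / completeness / overall metrics.
# The official evaluation also masks unobserved regions and clips outliers.
import torch

def mean_nn_distance(a, b, chunk=4096):
    """Mean nearest-neighbor distance from each point of a [N,3] to cloud b [M,3]."""
    mins = []
    for i in range(0, a.shape[0], chunk):       # chunked to bound the N x M memory
        d = torch.cdist(a[i:i + chunk], b)      # pairwise Euclidean distances
        mins.append(d.min(dim=1).values)
    return torch.cat(mins).mean()

def dtu_metrics(pred, gt):
    acc = mean_nn_distance(pred, gt)            # Acc.:  reconstruction -> ground truth
    comp = mean_nn_distance(gt, pred)           # Comp.: ground truth -> reconstruction
    return acc, comp, (acc + comp) / 2          # Overall: mean of the two, as in Table 4
```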
Get Citation

    Huabiao YAN, Fangqi XU, Lü'er HUANG, Cibo LIU, Chuxin LIN. Review of multi-view stereo reconstruction methods based on deep learning[J]. Optics and Precision Engineering, 2023, 31(16): 2444

    Paper Information

    Category: Information Sciences

    Received: Nov. 14, 2022

    Accepted: --

    Published Online: Sep. 5, 2023

Author Email: Lü'er HUANG (9320080310@jxust.edu.cn)

DOI: 10.37188/OPE.20233116.2444
