Optical Instruments, Volume 45, Issue 5, 62 (2023)

A dual-branch guided network for depth completion

Xiaofei QIN, Wenkai HU, Dongxian BAN, Hongyu GUO, and Jing YU
Author Affiliations
  • School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
    References (30)

    [1] JARITZ M, DE CHARETTE R, WIRBEL E, et al. Sparse and dense data with CNNs: depth completion and semantic segmentation[C]//International Conference on 3D Vision (3DV). Verona: IEEE, 2018: 52-60.

    [3] DU R F, TURNER E, DZITSIUK M, et al. DepthLab: real-time 3D interaction with depth maps for mobile augmented reality[C]//Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology. ACM, 2020: 829-843.

    [6] CHEN L, LIN H, LI S T. Depth image enhancement for Kinect using region growing and bilateral filter[C]//Proceedings of the 21st International Conference on Pattern Recognition. Tsukuba: IEEE, 2012: 3070-3073.

    [7] LIU S J, LAI P L, TIAN D, et al. Joint trilateral filtering for depth map compression[C]//Proceedings of SPIE 7744, Visual Communications and Image Processing 2010. Huangshan: SPIE, 2010: 77440F.

    [8] ALHWARIN F, FERREIN A, SCHOLL I. IR stereo Kinect: improving depth images by combining structured light with IR stereo[C]//13th Pacific Rim International Conference on Artificial Intelligence. Gold Coast: Springer, 2014: 409-421.

    [9] CHIU W W C, BLANKE U, FRITZ M. Improving the Kinect by cross-modal stereo[C]//British Machine Vision Conference. Dundee: BMVC, 2011: 1-10.

    [10] CHEN K, LAI Y K, WU Y X, et al. Automatic semantic modeling of indoor scenes from low-quality RGB-D data using contextual information[J]. ACM Transactions on Graphics, 2014, 33: 208.

    [11] ZHANG Y D, FUNKHOUSER T. Deep depth completion of a single RGB-D image[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 175-185.

    [12] QIU J X, CUI Z P, ZHANG Y D, et al. DeepLiDAR: deep surface normal guided depth prediction for outdoor scene from sparse LiDAR data and single color image[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach: IEEE, 2019: 3308-3317.

    [13] MA F C, CAVALHEIRO G V, KARAMAN S. Self-supervised sparse-to-dense: self-supervised depth completion from LiDAR and monocular camera[C]//2019 International Conference on Robotics and Automation (ICRA). Montreal: IEEE, 2019: 3288-3295.

    [15] CHENG X J, WANG P, YANG R G. Depth estimation via affinity learned with convolutional spatial propagation network[C]//Proceedings of the 15th European Conference on Computer Vision (ECCV). Munich: Springer, 2018: 108-125.

    [16] HUANG Y K, WU T H, LIU Y C, et al. Indoor depth completion with boundary consistency and self-attention[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision Workshop. Seoul: IEEE, 2019: 1070-1078.

    [17] SAJJAN S, MOORE M, PAN M, et al. Clear grasp: 3D shape estimation of transparent objects for manipulation[C]//2020 IEEE International Conference on Robotics and Automation (ICRA). Paris: IEEE, 2020: 3634-3642.

    [18] ZHU L Y, MOUSAVIAN A, XIANG Y, et al. RGB-D local implicit function for depth completion of transparent objects[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville: IEEE, 2021: 4647-4656.

    [19] HU M, WANG S L, LI B, et al. PENet: towards precise and efficient image guided depth completion[C]//2021 IEEE International Conference on Robotics and Automation (ICRA). Xi'an: IEEE, 2021: 13656-13662.

    [20] TANG Y J, CHEN J H, YANG Z G, et al. DepthGrasp: depth completion of transparent objects using self-attentive adversarial network with spectral residual for grasping[C]//2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Prague: IEEE, 2021: 5710-5716.

    [21] MIYATO T, KATAOKA T, KOYAMA M, et al. Spectral normalization for generative adversarial networks[C]//6th International Conference on Learning Representations. Vancouver: ICLR, 2018.

    [22] WOO S, PARK J, LEE J Y, et al. CBAM: convolutional block attention module[C]//Proceedings of the 15th European Conference on Computer Vision (ECCV). Munich: Springer, 2018: 3-19.

    [24] PASZKE A, GROSS S, MASSA F, et al. PyTorch: an imperative style, high-performance deep learning library[C]//Proceedings of the 33rd International Conference on Neural Information Processing Systems. Vancouver: NeurIPS, 2019: 721.

    [25] KINGMA D P, BA J. Adam: a method for stochastic optimization[C]//3rd International Conference on Learning Representations. San Diego: ICLR, 2015.

    [26] SILBERMAN N, HOIEM D, KOHLI P, et al. Indoor segmentation and support inference from RGBD images[C]//12th European Conference on Computer Vision. Florence: Springer, 2012: 746-760.

    [27] HARRISON A, NEWMAN P. Image and sparse laser fusion for dense scene reconstruction[C]//7th International Conference on Field and Service Robotics. Cambridge: Springer, 2010: 219-228.

    [28] LIU J Y, GONG X J. Guided depth enhancement via anisotropic diffusion[C]//14th Pacific-Rim Conference on Advances in Multimedia Information Processing. Nanjing: Springer, 2013: 408-417.

    [29] SENUSHKIN D, ROMANOV M, BELIKOV I, et al. Decoder modulation for indoor depth completion[C]//2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Prague: IEEE, 2021: 2181-2188.

    [30] ALHASHIM I, WONKA P. High quality monocular depth estimation via transfer learning[J]. arXiv: 1812.11941, 2018.


    Paper Information

    Received: Dec. 17, 2022

    Published Online: Dec. 27, 2023

    DOI: 10.3969/j.issn.1005-5630.2023.005.008
