Acta Optica Sinica, Vol. 40, Issue 1, 0111021 (2020)

Single-Image Refocusing Using Light Field Synthesis and Circle of Confusion Rendering

Qi Wang1,2,3 and Yutian Fu1,2,*
Author Affiliations
  • 1Key Laboratory of Infrared System Detection and Imaging, Chinese Academy of Sciences, Shanghai 200083, China
  • 2Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, China
  • 3University of Chinese Academy of Sciences, Beijing 100049, China
    Figures & Tables (11)
    Fig. 1. Framework of the single-image dynamic refocusing algorithm
    Fig. 2. Light field digital refocusing. (a) Sensor image; (b) sub-aperture image; (c) refocused image
    Fig. 3. Principle of CoC rendering by the gathering method
    Fig. 4. Deep network architectures. (a) Focal-stack-based method; (b) disparity-based method
    Fig. 5. Comparison of monocular depth estimation results. (a) Center image; (b) disparity method; (c) focal stack method; (d) method proposed by Godard et al.[16]; (e) normalized method proposed by Cheng et al.[17] (SSIM: 0.756); (f) normalized disparity method (SSIM: 0.823); (g) normalized focal stack method (SSIM: 0.895); (h) method proposed by Jeon et al.[18]
    Fig. 6. Quantitative and qualitative comparison of light field synthesis methods. (a) Method proposed by Kalantari et al.[10]; (b) method proposed by Srinivasan et al.[13]; (c) our method; (d) ground-truth center image; (e)(f)(g) rendered center images obtained by the methods proposed by Kalantari et al.[10], Srinivasan et al.[13], and our method
    Fig. 7. Quantitative and qualitative comparison of rendering methods. (a) Center image; (b1)--(b4) refocused images; (c) depth estimation using the focal stack method; (d1)--(d4) rendering results using the focal stack method; (e) depth estimation using the disparity method; (f1)--(f4) rendering results using the disparity method; (g)(h)(i) results of occlusion detection, focus on foreground, and focus on background obtained by the method proposed by Zhang et al.[5]; (j)(k)
    Fig. 8. Comparison of rendering results on different datasets. (a) Ground-truth center image; (b) depth estimation; (c) refocusing at near positions; (d) refocusing at far positions
    Fig. 9. Comparison of rendering effects on real scenes with images captured by different cameras. (a) Original image; (b)--(e) refocused images at four depths rendered by our method; (f) depth map; (g)(h) images shot by a dual camera focused at two positions; (i)(j) images shot by a Canon camera focused at two positions
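    The digital refocusing step pictured above (sensor image → sub-aperture images → refocused image) is conventionally implemented by shift-and-add: each sub-aperture view is translated in proportion to its angular offset from the center, and all shifted views are averaged. The following is a minimal grayscale sketch, not the authors' implementation; the array layout `(U, V, H, W)`, the parameter name `alpha`, and the use of integer-pixel shifts are simplifying assumptions.

    ```python
    import numpy as np

    def refocus(lf, alpha):
        """Shift-and-add digital refocusing of a 4D light field.

        lf    : array of shape (U, V, H, W) holding the sub-aperture
                images, indexed by angular coordinates (u, v).
        alpha : refocus parameter; each view is shifted in proportion
                to its angular offset from the central view, then all
                shifted views are averaged.
        """
        U, V, H, W = lf.shape
        cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
        out = np.zeros((H, W), dtype=np.float64)
        for u in range(U):
            for v in range(V):
                # Integer-pixel shift proportional to the angular offset
                # (a real implementation would interpolate sub-pixel shifts).
                du = int(round(alpha * (u - cu)))
                dv = int(round(alpha * (v - cv)))
                out += np.roll(lf[u, v], shift=(du, dv), axis=(0, 1))
        return out / (U * V)

    # Sanity check: a constant light field refocuses to the same constant image.
    lf = np.ones((3, 3, 8, 8))
    img = refocus(lf, alpha=0.5)
    ```

    Varying `alpha` sweeps the synthetic focal plane through the scene, which is what produces the focal stack compared against the disparity-based branch in the figures.
    
    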
    • Table 1. Parameters for light field datasets

      Dataset    Quantity   Resolution       Format
      Stanford   720        14×14×540×375    LFR/mat/npy
      UCSD       100        14×14×540×372    PNG
      Flower     3343       14×14×540×372    PNG
      EPFL       118        14×14×552×383    LFR/mat/npy
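    The resolutions in Table 1 read as 14×14 angular samples by (e.g.) 540×375 spatial pixels. As an illustration of how such a lenslet-style frame, in which each microlens covers a 14×14 block of sensor pixels, is de-interleaved into sub-aperture views, here is a sketch; the block layout and all array names are assumptions, not the datasets' actual on-disk format.

    ```python
    import numpy as np

    U, V, H, W = 14, 14, 540, 375

    # Hypothetical raw lenslet frame: each microlens covers a U x V block
    # of pixels, so the sensor image is (H*U) x (W*V).
    sensor = np.arange(H * U * W * V, dtype=np.float32).reshape(H * U, W * V)

    # De-interleave into a (U, V, H, W) stack of sub-aperture images:
    # pixel (u, v) under microlens (x, y) belongs to view (u, v).
    lf = sensor.reshape(H, U, W, V).transpose(1, 3, 0, 2)

    center = lf[U // 2, V // 2]   # the 540 x 375 central view
    ```

    The central view `center` corresponds to the "center image" panels shown throughout the figure captions.
    
    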
    • Table 2. Quantitative analysis of rendering effects on light field datasets

      Dataset    SSIM     PSNR /dB
      Stanford   0.897    30.72
      UCSD       0.912    32.11
      Flower     0.923    32.89
      EPFL       0.901    31.03
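    The PSNR column in Table 2 is the standard decibel measure of reconstruction error against the ground-truth view. A minimal sketch of the metric (the helper name `psnr` and the 8-bit peak value are assumptions):

    ```python
    import numpy as np

    def psnr(ref, test, peak=255.0):
        """Peak signal-to-noise ratio in dB between two images."""
        mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
        return 10.0 * np.log10(peak ** 2 / mse)

    ref = np.full((32, 32), 128.0)
    noisy = ref + 16.0          # uniform error of 16 grey levels
    val = psnr(ref, noisy)      # 20*log10(255/16) ≈ 24.05 dB
    ```

    SSIM, the other column, is a windowed structural comparison rather than a pixelwise one; a common off-the-shelf implementation is `skimage.metrics.structural_similarity`.
    
    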
    Qi Wang, Yutian Fu. Single-Image Refocusing Using Light Field Synthesis and Circle of Confusion Rendering[J]. Acta Optica Sinica, 2020, 40(1): 0111021

    Paper Information

    Category: Special Issue on Computational Optical Imaging

    Received: Aug. 5, 2019

    Accepted: Nov. 6, 2019

    Published Online: Jan. 6, 2020

    Corresponding author: Yutian Fu (yutianfu@mail.sitp.ac.cn)

    DOI:10.3788/AOS202040.0111021
