Chinese Optics Letters, Volume 20, Issue 4, 041101 (2022)
Deep learning-based scattering removal of light field imaging
Fig. 1. Overview of DeepSLFI. The light field imaging forward models can be built once the scattering imaging scene and the light field imaging system are determined. Simulated light field images serving as training samples are generated with these forward models. A neural network is then trained on the samples and used to remove the scattering from an experimentally captured light field image. Finally, a high-resolution, high-contrast 3D reconstruction is obtained by 3D deconvolution using the deblurred light field image and the scattering-free forward model.
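As an illustration of this workflow (a minimal sketch, not the authors' implementation), the Python snippet below pairs simulated scattered/scatter-free light fields with a toy convolutional network; simulate_pairs, the network architecture, and the training settings are placeholders, and the 3D deconvolution step is only indicated in a comment.

# Hypothetical sketch of the DeepSLFI workflow described in Fig. 1.
# simulate_pairs and the toy CNN below are placeholders, not the authors' code.
import numpy as np
import torch
import torch.nn as nn

def simulate_pairs(n, shape=(64, 64)):
    """Placeholder: generate (scattered, scatter-free) light field pairs.
    In the paper these come from the scattering / scattering-free
    forward models of the imaging system."""
    clean = np.random.rand(n, 1, *shape).astype(np.float32)
    blurred = clean + 0.1 * np.random.randn(*clean.shape).astype(np.float32)
    return torch.from_numpy(blurred), torch.from_numpy(clean)

# A toy CNN standing in for the scattering-removal network.
net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x, y = simulate_pairs(32)
for epoch in range(5):                      # train on simulated samples
    opt.zero_grad()
    loss = loss_fn(net(x), y)
    loss.backward()
    opt.step()

# Apply the trained network to an experimentally captured light field,
# then pass the deblurred image to 3D deconvolution with the
# scattering-free forward model (not implemented here).
captured = torch.rand(1, 1, 64, 64)
deblurred = net(captured).detach()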
Fig. 2. Diagram of light field imaging forward model. Light emitted from voxels of object space
Fig. 3. Light field images of the same object captured experimentally (a) without scattering and (b) with scattering. (a1) and (b1) correspond to the parts in the dashed boxes in (a) and (b), respectively. (c) Intensity curves of the pixels on the blue and green lines.
Fig. 8. Reconstruction results of the USAF target in the field of view. All of the images are scaled to [0, 1]. (a) Light field (i) without scattering, (ii) with scattering, and (iii) deblurred by the network. (iv) Intensity curves of the pixels corresponding to the lines in (i), (ii), and (iii). (v) PSNRs and SSIMs of the LFs. (b) 3D reconstructions from perspective and orthogonal views (noise consisting of only a few voxels that interferes with observation has been removed), where the x and y directions are the transverse directions and the z direction is the depth/axial direction. The yellow dashed box marks the depth position of the object. (c) Slice images of the 3D reconstructions. (d) Intensity curves of the voxels/pixels corresponding to the dashed lines in the slice images in (c): (i) curves along the manganese-purple dashed line in the x-y section, (ii) curves along the green dashed line in the x-y section, (iii) curves along the green dashed line in the x-z section and along the manganese-purple dashed line in the y-z section. (iv) PSNRs and SSIMs of the 3D reconstructions, where only the areas inside the yellow dashed box at each depth are used for the calculation; the part outside lies at the edge of the field of view, where the reconstruction quality is low, and is therefore not suitable for comparing how scattering affects the PSNRs and SSIMs.
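As an illustration of this evaluation (a sketch under stated assumptions, not the authors' code), PSNR and SSIM can be computed only inside a per-depth region of interest after scaling the volumes to [0, 1]; the roi_metrics helper, the crop coordinates, and the stand-in random volumes below are assumptions for demonstration, using scikit-image's metric functions.

# Illustrative sketch: evaluate PSNR/SSIM of a 3D reconstruction only
# inside a per-depth region of interest, as described for the yellow
# dashed boxes in Fig. 8. Crop coordinates here are arbitrary.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def scale01(vol):
    """Scale a volume to [0, 1], as stated for the images in Fig. 8."""
    vol = vol.astype(np.float64)
    return (vol - vol.min()) / (vol.max() - vol.min() + 1e-12)

def roi_metrics(recon, reference, box):
    """box = (y0, y1, x0, x1): crop each depth slice before comparing."""
    y0, y1, x0, x1 = box
    r = scale01(recon)[:, y0:y1, x0:x1]
    g = scale01(reference)[:, y0:y1, x0:x1]
    psnr = peak_signal_noise_ratio(g, r, data_range=1.0)
    ssim = structural_similarity(g, r, data_range=1.0)
    return psnr, ssim

# Example with random stand-in volumes shaped (depth, y, x).
recon = np.random.rand(20, 128, 128)
reference = np.random.rand(20, 128, 128)
print(roi_metrics(recon, reference, (32, 96, 32, 96)))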
Fig. 9. Reconstruction results for different arrangements of two slits located at different depths in object space. (a) The 3D reconstructions. The blue dashed boxes indicate the sizes and positions of the slits. "Depth" denotes the distance from the front plane of the depth of field. Median filtering is applied at the edge of the lateral field of view to remove noise consisting of a few voxels. (b) PSNRs and SSIMs of the 3D reconstructions.
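A minimal sketch of the edge denoising mentioned in this caption, assuming a hypothetical denoise_lateral_edges helper and an arbitrary margin width: it applies SciPy's median filter only near the lateral borders of the reconstructed volume, leaving the interior untouched.

# Hedged sketch, not the authors' code: median filtering restricted to
# the lateral edges of the volume to suppress isolated-voxel noise.
import numpy as np
from scipy.ndimage import median_filter

def denoise_lateral_edges(volume, margin=8, size=3):
    """volume: (z, y, x). Median-filter a border of width `margin`
    in the y/x (lateral) directions only."""
    filtered = median_filter(volume, size=size)
    out = volume.copy()
    mask = np.zeros(volume.shape, dtype=bool)
    mask[:, :margin, :] = True
    mask[:, -margin:, :] = True
    mask[:, :, :margin] = True
    mask[:, :, -margin:] = True
    out[mask] = filtered[mask]
    return out

vol = np.random.rand(20, 128, 128)
clean = denoise_lateral_edges(vol)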
Weihao Wang, Xing Zhao, Zhixiang Jiang, Ya Wen, "Deep learning-based scattering removal of light field imaging," Chin. Opt. Lett. 20, 041101 (2022)
Category: Imaging Systems and Image Processing
Received: Jan. 6, 2022
Accepted: Jan. 19, 2022
Posted: Jan. 20, 2022
Published Online: Feb. 17, 2022
The Author Email: Xing Zhao (zhaoxingtjnk@nankai.edu.cn)