Photonics Research, Vol. 12, Issue 3, 474 (2024)
Deep learning-based optical aberration estimation enables offline digital adaptive optics and super-resolution imaging
Fig. 1. (a)–(e) Network architecture of the space-frequency encoding network (SFE-Net). (a) Overall architecture of the SFE-Net, (b) residual group, (c) double convolutional block, (d) downscale block, and (e) upscale block.
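For orientation, a minimal PyTorch sketch of an encoder-decoder assembled from the components named in the caption is given below. The layer counts, channel widths, output PSF size (33×33), and the softmax normalisation of the output are illustrative assumptions, not the published SFE-Net configuration.

```python
import torch
import torch.nn as nn

class DoubleConv(nn.Module):
    """Two 3x3 convolutions with GELU activations (panel c)."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.GELU(),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.GELU(),
        )
    def forward(self, x):
        return self.body(x)

class ResidualGroup(nn.Module):
    """Stack of double convolutional blocks wrapped in a long skip connection (panel b)."""
    def __init__(self, c, n_blocks=2):
        super().__init__()
        self.blocks = nn.Sequential(*[DoubleConv(c, c) for _ in range(n_blocks)])
    def forward(self, x):
        return x + self.blocks(x)

class SFENetSketch(nn.Module):
    """Encoder-decoder mapping one aberrated WF image to a normalised PSF estimate."""
    def __init__(self, c=32, psf_size=33):
        super().__init__()
        self.psf_size = psf_size
        self.head = DoubleConv(1, c)
        self.down = nn.Conv2d(c, 2 * c, 3, stride=2, padding=1)   # downscale block (panel d)
        self.body = ResidualGroup(2 * c)
        self.up = nn.ConvTranspose2d(2 * c, c, 2, stride=2)       # upscale block (panel e)
        self.tail = nn.Sequential(nn.AdaptiveAvgPool2d(psf_size), nn.Conv2d(c, 1, 1))
    def forward(self, wf):
        x = self.head(wf)
        x = self.up(self.body(self.down(x)))
        psf = torch.softmax(self.tail(x).flatten(1), dim=1)       # PSF pixels sum to 1
        return psf.view(-1, 1, self.psf_size, self.psf_size)
```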
Fig. 2. (a)–(c) Network architecture of spatial feature transform-guided deep Fourier channel attention network (SFT-DFCAN). (a) Network architecture of the SFT-DFCAN, (b) spatial feature transform-guided Fourier channel attention block (FCAB), and (c) Fourier channel attention (FCA) layer.
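The two conditioning mechanisms named in the caption can be sketched compactly: a Fourier channel attention layer that weights channels using statistics of their amplitude spectra, and a spatial feature transform layer that modulates features with scale/shift maps predicted from the PSF prior. The reduction ratio, pooling choice, and the way the PSF prior is embedded are assumptions made for illustration, not the exact published layers.

```python
import torch
import torch.nn as nn

class FCALayer(nn.Module):
    """Fourier channel attention (panel c): channel weights computed from the
    amplitude spectrum of each feature map rather than its spatial mean."""
    def __init__(self, c, reduction=16):
        super().__init__()
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(c, c // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c // reduction, c, 1), nn.Sigmoid(),
        )
    def forward(self, x):
        spectrum = torch.abs(torch.fft.fft2(x, norm="ortho"))  # per-channel frequency content
        return x * self.attn(spectrum)

class SFTLayer(nn.Module):
    """Spatial feature transform (panel b): affine modulation of features by
    scale/shift maps derived from the PSF prior."""
    def __init__(self, c, c_prior=1):
        super().__init__()
        self.scale = nn.Conv2d(c_prior, c, 1)
        self.shift = nn.Conv2d(c_prior, c, 1)
    def forward(self, x, prior):
        # `prior` is the PSF prior resampled to the feature-map resolution
        return x * (1 + self.scale(prior)) + self.shift(prior)
```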
Fig. 3. Schematic of the data augmentation and training process of SFE-Net. Scale bar, 2 μm (original image), 1 μm (cropped regions).
Fig. 4. Optical aberration estimation via SFE-Net. (a) Representative aberrated PSFs estimated by KernelGAN, IKC, MANet, and SFE-Net from WF images of CCPs, ER, and MTs. Four groups of datasets with escalating complexity of aberration were generated, corresponding to Zernike polynomials of orders 4–6, 4–8, 4–13, and 4–18. The top and bottom rows show the input WF images and GT PSF images for reference. Scale bar, 1 μm. (b) Statistical comparisons (
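The four aberration levels are parameterised by Zernike modes of orders 4–6, 4–8, 4–13, and 4–18 (piston, tip, and tilt excluded). A minimal NumPy sketch of synthesising such an aberrated PSF from a circular pupil is shown below, assuming the aotools package for the Zernike modes; the pupil sampling, coefficient scale, and PSF size are illustrative choices rather than the dataset-generation settings used in the paper.

```python
import numpy as np
import aotools  # assumed available; supplies Zernike modes in Noll ordering

def aberrated_psf(coeffs_rad, n_pix=64):
    """Synthesise an aberrated PSF from Zernike coefficients (in radians RMS).

    coeffs_rad[i] weights Noll mode i + 4, so a 15-element vector covers
    modes 4-18 as in the most complex dataset of Fig. 4.
    """
    n_modes = len(coeffs_rad) + 3                        # modes 1-3 (piston/tip/tilt) excluded
    modes = aotools.zernikeArray(n_modes, n_pix)         # (n_modes, n_pix, n_pix)
    phase = np.tensordot(coeffs_rad, modes[3:], axes=1)  # pupil-plane wavefront error
    pupil = aotools.circle(n_pix // 2, n_pix)            # binary circular aperture
    field = pupil * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return psf / psf.sum()                               # unit-energy PSF

# Example: one random aberration drawn over modes 4-18
rng = np.random.default_rng(0)
psf = aberrated_psf(rng.normal(0.0, 0.3, size=15))
```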
Fig. 5. Progression of the training loss and validation PSNR of the network model with/without the FFT layer and frequential branch during the training process.
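The ablation compares the full model against a variant without the FFT layer and frequential branch. A hedged sketch of what such a two-branch "space-frequency" encoder could look like is given below, with the frequency branch switchable off to mimic the ablated variant; the branch widths and the log-amplitude spectrum input are assumptions, not the published design.

```python
import torch
import torch.nn as nn

class SpaceFrequencyEncoder(nn.Module):
    """Spatial conv branch plus a frequential branch on the log-amplitude
    spectrum of the input; use_fft_branch=False mimics the ablated model."""
    def __init__(self, c=32, use_fft_branch=True):
        super().__init__()
        self.use_fft_branch = use_fft_branch
        self.spatial = nn.Sequential(nn.Conv2d(1, c, 3, padding=1), nn.GELU())
        self.frequential = nn.Sequential(nn.Conv2d(1, c, 3, padding=1), nn.GELU())
        self.fuse = nn.Conv2d(2 * c if use_fft_branch else c, c, 1)

    def forward(self, x):
        feats = [self.spatial(x)]
        if self.use_fft_branch:
            amp = torch.log1p(torch.abs(torch.fft.fftshift(torch.fft.fft2(x, norm="ortho"))))
            feats.append(self.frequential(amp))   # FFT layer feeding the frequential branch
        return self.fuse(torch.cat(feats, dim=1))
```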
Fig. 6. (a)–(c) Blind deconvolution with the estimated PSF. Representative deconvolved images of (a) CCPs, (b) ER, and (c) MTs processed with the RL deconvolution algorithm using ideal Gaussian PSF and PSF estimated by KernelGAN, IKC, MANet, and SFE-Net. The aberrated WF images (bottom right in the first column), deconvolved images (top left in the first column), and GT PSF images are shown. (d) PSNR curves calculated between RL deconvolved images using GT PSF and estimated PSFs, with the deconvolution iteration ranging from 5 to 100 (
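The deconvolution step itself is standard Richardson–Lucy with the estimated kernel substituted for the true PSF. A self-contained NumPy/SciPy sketch follows; the iteration count and the flat initialisation are arbitrary choices.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=50, eps=1e-12):
    """Richardson-Lucy deconvolution of an aberrated WF image with a given PSF.

    `psf` can be the kernel estimated by SFE-Net, a competing estimator,
    or an ideal Gaussian approximation, as compared in Fig. 6.
    """
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]                           # flipped kernel for the correction step
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)                 # data-fidelity ratio
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```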
Fig. 7. Aberration-aware image super-resolution reconstruction with the estimated PSF. (a) Representative SR images reconstructed by DFCAN and SFT-DFCAN with PSFs obtained from KernelGAN, IKC, MANet, and SFE-Net. Low-resolution images and high-resolution GT images are provided for reference. The corresponding estimated PSF images are presented in the top right corner of each reconstructed SR image. Scale bar, 1 μm, and 0.5 μm (zoom-in regions). (b) Statistical comparison of PSNR values for the output SR images produced by DFCAN and SFT-DFCAN with PSFs estimated by KernelGAN, IKC, MANet, and SFE-Net (
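Putting the two networks together, offline digital adaptive optics reduces to estimating the PSF from the raw frame and feeding both to the PSF-conditioned SR model. The sketch below reuses the hypothetical SFENetSketch class from the note after Fig. 1 and stands in a trivial upsampling stub for the SR network, so everything here is illustrative rather than the published pipeline.

```python
import torch
import torch.nn.functional as F

sfe_net = SFENetSketch().eval()   # PSF estimator from the sketch after Fig. 1

# A trivial upsampling stub stands in for SFT-DFCAN so the sketch runs end to end;
# in practice this would be the PSF-conditioned SR network of Fig. 2.
sr_net = lambda img, prior: F.interpolate(img, scale_factor=2, mode="bilinear")

@torch.no_grad()
def digital_ao_super_resolution(wf_image):
    """wf_image: (1, 1, H, W) aberrated wide-field frame normalised to [0, 1]."""
    psf = sfe_net(wf_image)                                                # 1. estimate the PSF
    prior = F.interpolate(psf, size=wf_image.shape[-2:], mode="bilinear")  # 2. PSF as spatial prior
    return sr_net(wf_image, prior), psf                                    # 3. aberration-aware SR
```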
Fig. 8. Digital adaptive optics and super-resolution for live-cell imaging. Time-lapse WF images, PSFs estimated by SFE-Net, and corresponding SR images generated by SFE-Net-facilitated SFT-DFCAN of (a) MTs, (b) CCPs, and (c) ER. During the imaging procedure, a defocus aberration is manually added to the MT data in (a), while a combination of defocus and coma aberrations and a combination of defocus and spherical aberrations are applied to the CCP images in (b) and the ER images in (c), respectively. The PSFs estimated by SFE-Net, along with their corresponding profiles and FWHM values, are displayed in the top right corner of the SR images. Scale bar, 1 μm [(a)–(c)], and 0.2 μm [zoom-in regions of (a)–(c)].
Chang Qiao, Haoyu Chen, Run Wang, Tao Jiang, Yuwang Wang, Dong Li, "Deep learning-based optical aberration estimation enables offline digital adaptive optics and super-resolution imaging," Photonics Res. 12, 474 (2024)
Category: Imaging Systems, Microscopy, and Displays
Received: Sep. 27, 2023
Accepted: Dec. 20, 2023
Published Online: Feb. 29, 2024
Author email: Dong Li (lidong@ibp.ac.cn)
CSTR:32188.14.PRJ.506778