Photonics Research, Volume 12, Issue 3, 474 (2024)

Deep learning-based optical aberration estimation enables offline digital adaptive optics and super-resolution imaging

Chang Qiao1,2,†, Haoyu Chen3,4,†, Run Wang1,†, Tao Jiang3,4, Yuwang Wang5,6, and Dong Li3,4,*
Author Affiliations
  • 1Department of Automation, Tsinghua University, Beijing 100084, China
  • 2Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China
  • 3National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing 100101, China
  • 4College of Life Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
  • 5Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China
  • 6e-mail: wang-yuwang@mail.tsinghua.edu.cn
    Figures & Tables (8)
    (a)–(e) Network architecture of space-frequency encoding network. (a) Network architecture of the SFE-Net, (b) residual group, (c) double convolutional block, (d) downscale block, and (e) upscale block.
    (a)–(c) Network architecture of spatial feature transform-guided deep Fourier channel attention network (SFT-DFCAN). (a) Network architecture of the SFT-DFCAN, (b) spatial feature transform-guided Fourier channel attention block (FCAB), and (c) Fourier channel attention (FCA) layer.
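The Fourier channel attention (FCA) layer in (c) reweights feature channels using statistics of their power spectra, so channels carrying fine (high-frequency) structure can be emphasized during reconstruction. A minimal NumPy sketch of the idea — the function name, the two-layer bottleneck, and the weight shapes are our illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fourier_channel_attention(features, w1, b1, w2, b2):
    """Sketch of an FCA layer: gate channels by their Fourier-domain energy.

    features : (C, H, W) feature maps
    w1, b1   : bottleneck weights, shapes (C_mid, C) and (C_mid,)
    w2, b2   : expansion weights, shapes (C, C_mid) and (C,)
    """
    # Power spectrum of each channel (shift puts DC at the center).
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(features, axes=(1, 2)),
                                      axes=(1, 2)))
    pooled = spectrum.mean(axis=(1, 2))                 # global average pool -> (C,)
    hidden = np.maximum(0.0, w1 @ pooled + b1)          # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden + b2)))    # sigmoid channel weights
    return features * gate[:, None, None]               # rescale each channel
```

Because the gate is a sigmoid, each channel is attenuated by a factor in (0, 1), i.e., attention redistributes emphasis rather than amplifying activations.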
    Schematic of the data augmentation and training process of SFE-Net. Scale bar, 2 μm (original image), 1 μm (cropped regions).
    Optical aberration estimation via SFE-Net. (a) Representative aberrated PSFs estimated by KernelGAN, IKC, MANet, and SFE-Net from WF images of CCPs, ER, and MTs. Four groups of datasets with escalating aberration complexity were generated, corresponding to Zernike polynomials of orders 4–6, 4–8, 4–13, and 4–18. The top and bottom rows show the input WF images and GT PSF images for reference. Scale bar, 1 μm. (b) Statistical comparisons (n=30) of KernelGAN, IKC, MANet, and SFE-Net in terms of peak signal-to-noise ratio (PSNR) on different training and testing datasets. Center line, medians; box limits, 25th and 75th percentiles; whiskers, the largest data point no greater than the 75th percentile plus 1.5× the interquartile range (IQR), and the smallest data point no less than the 25th percentile minus 1.5× the IQR; outliers, data points beyond the whiskers. The same notations for box plots are used in Figs. 6(e) and 7(b).
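The PSNR metric and the box-plot whisker convention used throughout these comparisons can be sketched as follows (a minimal NumPy illustration; `psnr` and `box_stats` are our hypothetical helper names, not the authors' code):

```python
import numpy as np

def psnr(gt, pred, data_range=None):
    """Peak signal-to-noise ratio in dB between a ground-truth and a prediction."""
    gt = np.asarray(gt, float)
    pred = np.asarray(pred, float)
    if data_range is None:
        data_range = gt.max() - gt.min()
    mse = np.mean((gt - pred) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def box_stats(samples):
    """Box-plot statistics with Tukey-style 1.5*IQR whiskers."""
    x = np.asarray(samples, float)
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    iqr = q3 - q1
    upper = x[x <= q3 + 1.5 * iqr].max()  # largest point within the upper fence
    lower = x[x >= q1 - 1.5 * iqr].min()  # smallest point within the lower fence
    return lower, q1, med, q3, upper      # points beyond the whiskers are outliers
```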
    Progression of the training loss and validation PSNR during training for network models with and without the FFT layer and frequential branch.
    (a)–(c) Blind deconvolution with the estimated PSF. Representative deconvolved images of (a) CCPs, (b) ER, and (c) MTs processed with the RL deconvolution algorithm using ideal Gaussian PSF and PSF estimated by KernelGAN, IKC, MANet, and SFE-Net. The aberrated WF images (bottom right in the first column), deconvolved images (top left in the first column), and GT PSF images are shown. (d) PSNR curves calculated between RL deconvolved images using GT PSF and estimated PSFs, with the deconvolution iteration ranging from 5 to 100 (n=120). (e) Statistical comparisons of PSNR for testing datasets of CCPs (left), ER (middle), and MTs (right), respectively (n=30). Scale bar, 1 μm [(a)–(c)], 0.25 μm [zoom-in regions of (a)–(c)].
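The RL algorithm referenced above is the classic Richardson-Lucy iteration, which alternates a forward blur with the (estimated) PSF and a multiplicative correction using the flipped PSF. A minimal NumPy/SciPy sketch — not the authors' code; the function name and flat initialization are our illustrative choices:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=50, eps=1e-12):
    """Richardson-Lucy deconvolution of `image` with kernel `psf`."""
    psf = psf / psf.sum()              # normalize the kernel to preserve flux
    psf_mirror = psf[::-1, ::-1]       # flipped PSF for the correction step
    estimate = np.full(image.shape, max(image.mean(), eps))
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + eps)            # data / model
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

As panel (d) illustrates, the quality of the restoration depends strongly on how close the supplied PSF is to the true one, which is why accurate PSF estimation matters for blind deconvolution.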
    Aberration-aware image super-resolution reconstruction with the estimated PSF. (a) Representative SR images reconstructed by DFCAN and SFT-DFCAN with PSFs obtained from KernelGAN, IKC, MANet, and SFE-Net. Low-resolution images and high-resolution GT images are provided for reference. The corresponding estimated PSF images are presented in the top right corner of each reconstructed SR image. Scale bar, 1 μm, and 0.5 μm (zoom-in regions). (b) Statistical comparison of PSNR values for the output SR images produced by DFCAN and SFT-DFCAN with PSFs estimated by KernelGAN, IKC, MANet, and SFE-Net (n=30).
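The spatial feature transform (SFT) mechanism in SFT-DFCAN conditions the SR network on the estimated PSF by predicting affine parameters (gamma, beta) that modulate intermediate feature maps. A minimal sketch under the assumption of 1×1-convolution (matrix) conditioning layers; all names and shapes here are illustrative, not the published architecture:

```python
import numpy as np

def sft_modulate(features, cond, wg, bg, wb, bb):
    """Affine feature modulation: gamma(cond) * features + beta(cond).

    features, cond : (C, H, W) feature maps and PSF-derived condition maps
    wg, wb         : (C, C) 1x1-conv weights producing gamma and beta
    bg, bb         : (C,) biases
    """
    C, H, W = features.shape
    flat = cond.reshape(C, -1)                        # treat 1x1 conv as matmul
    gamma = (wg @ flat + bg[:, None]).reshape(C, H, W)
    beta = (wb @ flat + bb[:, None]).reshape(C, H, W)
    return gamma * features + beta                    # per-pixel affine transform
```

The key point is that gamma and beta vary spatially with the condition maps, so the same network can adapt its reconstruction to different aberrated PSFs.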
    Digital adaptive optics and super-resolution for live-cell imaging. Time-lapse WF images, PSFs estimated by SFE-Net, and corresponding SR images generated by SFE-Net-facilitated SFT-DFCAN of (a) MTs, (b) CCPs, and (c) ER. During imaging, a defocus aberration is manually added to the MT data (a), while a combination of defocus and coma aberrations is applied to the CCP images (b) and a combination of defocus and spherical aberrations to the ER images (c). The PSFs estimated by SFE-Net, along with their corresponding profiles and FWHM values, are displayed in the top right corner of the SR images. Scale bar, 1 μm [(a)–(c)], and 0.2 μm [zoom-in regions of (a)–(c)].

    Chang Qiao, Haoyu Chen, Run Wang, Tao Jiang, Yuwang Wang, Dong Li. Deep learning-based optical aberration estimation enables offline digital adaptive optics and super-resolution imaging[J]. Photonics Research, 2024, 12(3): 474

    Paper Information

    Category: Imaging Systems, Microscopy, and Displays

    Received: Sep. 27, 2023

    Accepted: Dec. 20, 2023

    Published Online: Feb. 29, 2024

    The Author Email: Dong Li (lidong@ibp.ac.cn)

    DOI:10.1364/PRJ.506778
