Advanced Imaging, Volume 2, Issue 2, 021001 (2025)
Self-supervised PSF-informed deep learning enables real-time deconvolution for optical coherence tomography
Fig. 1. Schematic of the real-time OCT deconvolution framework, encompassing both training and inference phases. The training phase involves collecting a diverse dataset and applying a subsampling strategy to generate denoised images, which are then used to train a denoising network as detailed in this paper. Sparse deconvolution is then applied to the denoised images to create a set of enhanced images for supervision. The trained network is deployed in the inference phase for direct enhancement of OCT scans, enabling real-time visualization.
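The framework above hinges on PSF-informed deconvolution to generate the supervision targets. The paper's sparse deconvolution is not reproduced here; as a minimal, hedged illustration of the general idea of PSF-based deblurring, the sketch below applies the classic Lucy–Richardson update to a 1-D intensity profile (the function name and initialization are my own, not taken from the paper):

```python
import numpy as np

def richardson_lucy_1d(observed, psf, num_iter=30, eps=1e-12):
    """Minimal 1-D Lucy-Richardson deconvolution (illustrative sketch).

    observed : blurred, non-negative intensity profile
    psf      : point-spread function (normalized internally to sum to 1)
    """
    psf = np.asarray(psf, dtype=float)
    psf = psf / psf.sum()
    psf_flipped = psf[::-1]  # adjoint of the blur operator
    estimate = np.asarray(observed, dtype=float).copy()
    for _ in range(num_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)  # eps guards against division by zero
        estimate *= np.convolve(ratio, psf_flipped, mode="same")
    return estimate
```

Given a known (here, Gaussian) PSF and a blurred point-like reflector, the iteration concentrates energy back toward the point, which is the behavior the captioned pipeline exploits when building its enhanced supervision set.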
Fig. 2. Architecture of the proposed lightweight DNN. (a) Overview of the DNN structure, highlighting four convolutional blocks followed by concatenations and an activation layer. The numbers indicate the corresponding number of channels. (b) Detailed composition of a convolutional block, consisting of two
Fig. 3. Methodology and results of PSF estimation. (a) Schematic of our home-built OCT system. (b) Workflow for determining the PSF of our custom OCT systems using gold nanoparticles, including imaging, averaging, selection, fitting, and modeling steps. (c) Comparative analysis of OCT images from orange and organoid samples before and after denoising with the proposed method. (d) Comparative assessment of axial and transverse PSFs derived from orange and organoid images, contrasting estimates from original and denoised samples with GT measurements.
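The PSF-estimation workflow in Fig. 3(b) includes a fitting step on averaged nanoparticle profiles. As a hedged sketch of that step only (assuming a Gaussian PSF model; function names and the `pixel_size` parameter are illustrative, not from the paper), a 1-D Gaussian fit with `scipy.optimize.curve_fit` might look like:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def fit_psf_profile(profile, pixel_size=1.0):
    """Fit a 1-D Gaussian to an averaged nanoparticle intensity profile
    and report sigma and FWHM in physical units (illustrative sketch)."""
    profile = np.asarray(profile, dtype=float)
    x = np.arange(profile.size) * pixel_size
    # rough initial guess: peak amplitude, peak location, ~10% of the window
    p0 = (profile.max(), x[np.argmax(profile)], pixel_size * profile.size / 10)
    (amp, mu, sigma), _ = curve_fit(gaussian, x, profile, p0=p0)
    sigma = abs(sigma)
    fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma  # FWHM of a Gaussian
    return {"amplitude": amp, "center": mu, "sigma": sigma, "fwhm": fwhm}
```

Fitting the axial and transverse profiles separately would yield the anisotropic PSF estimates compared in Fig. 3(d).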
Fig. 4. Comparative analysis of resolution enhancement by different deconvolution algorithms on gold nanoparticles. (a) Imaging characteristics of individual gold nanoparticles in OCT B-scans, demonstrating the effects of various deconvolution techniques. (b) Three-dimensional intensity distributions of a nanoparticle before and after processing with the proposed method. (c) Axial and transverse intensity profiles at the maximum intensity point of a nanoparticle, comparing original and deconvolved images. (d) Quantitative resolution comparison, measured as FWHM, for 30 gold nanoparticles processed with different methods, with the statistical representation of the interquartile range and outliers. LR, Lucy–Richardson deconvolution; SD, sparse deconvolution.
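The resolution metric in Fig. 4(d) is the FWHM of individual nanoparticle intensity profiles. A common way to measure FWHM directly from a sampled profile, without assuming a model shape, is linear interpolation at the half-maximum crossings; the following sketch (names are my own) shows one such implementation:

```python
import numpy as np

def fwhm(profile, pixel_size=1.0):
    """Full width at half maximum of a single-peaked profile,
    using linear interpolation at the half-maximum crossings."""
    profile = np.asarray(profile, dtype=float)
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    left, right = above[0], above[-1]
    # interpolate the left crossing between samples left-1 and left
    if left > 0:
        l = left - (profile[left] - half) / (profile[left] - profile[left - 1])
    else:
        l = float(left)
    # interpolate the right crossing between samples right and right+1
    if right < profile.size - 1:
        r = right + (profile[right] - half) / (profile[right] - profile[right + 1])
    else:
        r = float(right)
    return (r - l) * pixel_size
```

Applied to the axial and transverse cuts through each nanoparticle's intensity maximum, this yields the per-particle widths that the box plots in Fig. 4(d) summarize.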
Fig. 5. Comparison of different algorithms for OCT images of orange samples. (a) Comparison between the original B-scan and its deconvolved counterpart using our proposed method. (b) Intensity profiles of the images processed with various deconvolution algorithms along the green lines indicated in (a). (c) Comparison of the original images and the results after processing with different deconvolution algorithms for the regions of interest outlined by dashed boxes in (a). LR, Lucy–Richardson deconvolution; SD, sparse deconvolution.
Fig. 6. Enhanced OCT images of ocular samples using different deconvolution methods. (a) Comparison of human eye images, with insets showing detailed regions. (b) and (c) Posterior human eye[33] and rabbit retina images before and after enhancement with our proposed method, highlighting improved clarity and resolution. LR, Lucy–Richardson deconvolution; SD, sparse deconvolution.
Fig. 7. Comparison of different algorithms for swine artery endoscopic scans. (a) and (c) Endoscopic scans of the swine artery on two systems[32] before and after enhancement using our method. (b) and (d) Comparisons of different algorithms corresponding to the respective regions marked on the left.
Fig. 8. Generalization evaluation of the proposed deconvolution method using previously unseen OCT images of optical tape and a human finger. (a), (d) Original and enhanced OCT images, with white dashed boxes highlighting regions of interest that are enlarged in (b), (e). (c), (f) Intensity profiles along the green lines in (a) and (d), demonstrating improved resolution and structural clarity.
Fig. 9. Inference time and model parameter sizes for different algorithms. (a) Comparison of inference time for different algorithms on images of varying sizes. (b) Model parameter sizes for the U-Net and the proposed lightweight network.
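The inference-time comparison in Fig. 9(a) amounts to wall-clock benchmarking across input sizes. A minimal, generic timing harness for such a comparison might look like the sketch below (the `dummy_enhance` stand-in is hypothetical; it is not the paper's network):

```python
import time
import numpy as np

def time_inference(fn, image, n_runs=10):
    """Average wall-clock time of fn(image) over n_runs,
    after one warm-up call (standard practice when benchmarking)."""
    fn(image)  # warm-up: exclude one-time setup cost from the measurement
    start = time.perf_counter()
    for _ in range(n_runs):
        fn(image)
    return (time.perf_counter() - start) / n_runs

# hypothetical stand-in for a trained network's forward pass
def dummy_enhance(img):
    return np.clip(img * 1.5, 0.0, 1.0)
```

Running `time_inference` on images of increasing size (e.g., 256x256 up to 1024x1024) for each algorithm reproduces the kind of comparison the figure reports.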
Fig. 10. The impact of parameters on the results at various stages of our workflow. (a) Original and enhanced images of orange slices using different denoising techniques. (b) Enhanced images with varying PSF parameters applied in the sparse deconvolution step. Each row or column shares the same standard deviation along the
Weiyi Zhang, Haoran Zhang, Qi Lan, Chang Liu, Zheng Li, Chengfu Gu, Jianlong Yang, "Self-supervised PSF-informed deep learning enables real-time deconvolution for optical coherence tomography," Adv. Imaging 2, 021001 (2025)
Category: Research Article
Received: Dec. 24, 2024
Accepted: Feb. 11, 2025
Published Online: Mar. 19, 2025
The Author Email: Jianlong Yang (jyangoptics@gmail.com)