Abstract
In digital holographic microscopy, the quantitative phase image suffers from phase aberrations and coherent noise. To solve these problems, two independent steps are applied sequentially in the reconstruction procedure: phase aberration compensation and denoising. Here we demonstrate, for the first time to the best of our knowledge, that the reconstruction process can be simplified by replacing these two steps with a single deep learning-based algorithm. A convolutional neural network is trained to perform phase aberration correction and denoising simultaneously from only a wrapped phase map. To train the network, a database is constructed that consists of massive numbers of wrapped phase maps as inputs and noise-free sample phase maps as labels. The generated wrapped phase maps include a variety of phase aberrations and faithful coherent noise reconstructed from a practical apparatus. The trained network is applied to correct the phase aberrations and denoise both simulated and experimental data for quantitative phase imaging. It exhibits excellent performance, with output comparable to that reconstructed by the double exposure method for phase aberration correction followed by block-matching and 3D filtering for denoising, while outperforming the other conventional two-step methods.
© 2024 Optica Publishing Group. All rights, including for text and data mining (TDM), Artificial Intelligence (AI) training, and similar technologies, are reserved.
1. INTRODUCTION
Digital holographic microscopy (DHM) is a powerful quantitative phase imaging technique [1,2]. Owing to its noncontact, label-free, and full-field nature, it has found various applications, such as engineered surface topography measurement [3,4], biological cell imaging [5,6], and so on [7]. With the development of optical setups and the advancement of reconstruction algorithms, DHM continues to evolve, and techniques that exceed the current spatial and temporal resolution while achieving high phase sensitivity and high-throughput imaging have emerged [8–10]. The advancement of DHM has in turn boosted its application in high-fidelity quantitative phase imaging for both biomedicine and industrial metrology. In these applications, the behaviors and morphologic features of a cell or a neuron in different biological processes can be monitored [10], or the topography of a micro-device can be inspected during its precision manufacturing [9].
In DHM, the optical path difference (OPD) induced by a specimen is recorded in the form of a hologram. An objective is used to magnify the measured sample, and it introduces a spherical phase curvature in the object beam, whereas the reference beam is usually set as a plane wave. The wavefront mismatch between the two beams forms the main part of the phase aberration in the measured phase map [11,12]. During the recording, unwanted phase terms, i.e., phase aberrations, are recorded simultaneously with the desired phase map associated with the specimen. The phase aberrations generally include a linear phase difference introduced by the off-axis interferometric geometry, the spherical phase curvature caused by the microscope objective, and other high-order phase terms related to the optical lenses [13]. These phase aberrations superimpose on the sample's phase and hinder identification of the sample; thus, they need to be compensated to obtain the exact sample phase image. On top of this, the phase image of the specimen also suffers from coherent noise [14]. Because of the highly coherent laser used in the setup, parasitic interference fringes resulting from multiple reflections from the coverslip, as well as diffraction patterns from dust or scratches on the lens surfaces, are introduced into the hologram, resulting in phase noise in the reconstructed image. In the reconstruction procedure, both the phase aberrations and the coherent noise are superimposed on the sample phase map, seriously degrading the phase fidelity and hampering the application of DHM in phase imaging. An obvious advantage of DHM is its post-processing capability, with which the phase aberrations and coherent noise in the phase image can be handled numerically. To achieve accurate phase imaging, a general reconstruction procedure containing multiple steps is shown in Fig. 1(a). The procedure includes two independent processes, phase aberration correction and phase denoising, besides the basic wrapped phase extraction and phase unwrapping operations.
In terms of phase aberration correction, various numerical methods have been suggested, and they can be classified into two groups: traditional numerical compensation [15–26] and learning-based methods [13,27–32]. With the traditional numerical compensation methods, the phase curvature can be estimated by least-squares fitting with either polynomial functions [16–18] or Zernike polynomials [19]. Phase aberrations that contain only linear and spherical phase terms can be compensated with the principal component analysis (PCA) method [20], the spectral analysis method [12], or the geometric transformation method [21,22]. The phase aberrations can also be evaluated by nonlinear optimization, but with a heavy computational burden [23–25]. Recently, a phase variable splitting framework based on alternating direction sparse optimization was proposed for aberration correction [9,11,26]. Faithful phase reconstruction is demonstrated both in DHM [11] and in synthetic-aperture phase microscopy [26]. Deep learning-based methods have emerged as a new class of techniques, demonstrating outstanding performance in aberration correction. With a convolutional neural network (CNN), the Unet, the background in the phase map can be segmented, and the phase aberration is then evaluated by a polynomial fitting operation [27]. The aberration-free phase image can also be output directly with a trained Unet [28]. The phase aberration can be estimated by determining the coefficients of its Zernike polynomials through linear regression with a ResNet-50 model [13] or a self-supervised sparse constraint network (SSCNet) [29], and the reconstruction accuracy can be further improved by implementing an additional numerical fitting procedure [30]. Instead of processing the phase image directly, a sample-free hologram is generated with a trained two-stage generative adversarial network (GAN), either from a masked hologram where the sample area is labeled in advance [31] or directly from a sample hologram [32].
On the other hand, the coherent noise present in the phase image can be reduced either by digital image processing methods [3,14,33] or by learning-based methods [34–37]. Median filtering is a basic operation that suppresses the noise to some extent, but the finer details of the sample may also be smoothed. Low-pass filtering simply assumes the noise lies in the high-frequency region of the spectrum, and a Butterworth digital filter or a Gaussian filter can be designed to filter out the high-frequency components [3]. Block-matching and 3D filtering (BM3D) is a state-of-the-art technique that combines non-local filtering and transform-domain filtering, and it demonstrates impressive denoising performance among the traditional digital methods [14,33]. Deep learning-based methods can also be used to suppress the noise in the phase image. The speckle noise in a phase map of a deformed surface for stress analysis can be suppressed with a retrained DnCNN model originally designed for Gaussian noise removal in natural images [34]. The same kind of noise can also be reduced with a GAN model trained on a relatively smaller dataset [35]. Without noise-free images, a self-supervised network can be trained for denoising with a dataset containing only noisy images [36]. Better performance is achieved with a DnCNN model incorporating the attention mechanism and residual connections, trained with numerically simulated datasets [37]. However, in these learning-based denoising methods, the noise in the training data is simply modeled as speckle noise [34,35], Gaussian noise [36], or Perlin noise [37], whereas the coherent noise in the phase image is so complicated that it cannot be characterized by any simple model [14]. As a result, the network may be effective in dealing with the same kind of noise included in the training dataset, while other kinds of noise remain in the final phase map [37].
Currently, the problems of phase aberration correction (PAC) and denoising (DN) are treated as two independent tasks that are executed sequentially in phase imaging with DHM, for example, PAC followed by DN, as shown in Fig. 1(a). However, noise in the phase image seriously affects the efficiency of the PAC procedure, especially when least-squares-based fitting methods are adopted [16–19]. Alternatively, DN can be applied before PAC, but the aberrations present in the phase image then make denoising a tricky task: inadequate filtering leaves residual noise in the phase image, while excessive filtering over-smooths the sample. Here, for the first time to the best of our knowledge, we present a one-step method that achieves PAC and DN simultaneously by using deep learning. A simulated dataset with wrapped phase images as network input is synthesized to train a CNN. With the trained CNN, as shown in Fig. 1(b), an image-to-image transformation from an aberrated, noisy wrapped phase image to an aberration-free, noise-reduced phase image is produced directly. With this one-step method, not only is the reconstruction process in phase imaging simplified, but the trained CNN also proves to be state-of-the-art in PAC and DN.
2. METHODS
A. Aberrated and Noisy Phase Data
In DHM, a hologram is formed by the coherent addition of a complex object wave $O(x,y)$ and a reference wave $R(x,y)$. In the image sensor plane, its intensity can be expressed as
$$I(x,y)=|O|^{2}+|R|^{2}+OR^{*}+O^{*}R+n(x,y).\tag{1}$$
Here the first two terms are the direct components, the third term contains the complex object wave, the fourth term contains its conjugate, and the fifth term $n(x,y)$ contains the noises, which include shot noise, speckle noise, parasitic fringes, etc. [14].
In an off-axis hologram, since a carrier frequency is present, the complex object wave can be retrieved with the spectrum filtering method [4]; it is denoted as $\tilde{O}(x,y)$. In the reconstruction procedure, a wrapped phase image $\varphi_{w}(x,y)$ is then obtained:
$$\varphi_{w}(x,y)=\arctan\!\left[\frac{\operatorname{Im}\{\tilde{O}(x,y)\}}{\operatorname{Re}\{\tilde{O}(x,y)\}}\right].\tag{2}$$
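For clarity, a minimal NumPy sketch of this spectrum-filtering step is given below. The carrier position and filter radius are assumed to be known (or detected) beforehand, and the routine is only an illustration of the principle, not the exact implementation used in [4]; the residual carrier tilt left in the retrieved phase simply becomes part of the aberration term handled later.

```python
import numpy as np

def wrapped_phase_from_hologram(hologram, carrier_pos, radius):
    """Retrieve the wrapped phase map from an off-axis hologram by spectrum filtering.

    hologram    : 2D real-valued array (recorded intensity)
    carrier_pos : (row, col) of the +1 diffraction order in the centered spectrum
    radius      : radius of the circular band-pass filter around that order
    """
    spectrum = np.fft.fftshift(np.fft.fft2(hologram))

    # Circular mask that keeps only the +1 (object) diffraction order.
    rows, cols = hologram.shape
    y, x = np.ogrid[:rows, :cols]
    mask = (y - carrier_pos[0]) ** 2 + (x - carrier_pos[1]) ** 2 <= radius ** 2

    # Inverse transform of the filtered spectrum gives the complex object wave;
    # the carrier tilt it still carries shows up as part of the phase aberration.
    obj = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))

    # Wrapped phase in (-pi, pi], Eq. (2).
    return np.angle(obj)
```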
To remove the $2\pi$ modulo ambiguity in $\varphi_{w}(x,y)$, a phase unwrapping algorithm is applied to get the unwrapped phase image $\varphi(x,y)$. However, in addition to the desired phase image of the specimen $\varphi_{s}(x,y)$, it contains both the phase aberrations $\varphi_{a}(x,y)$ and the phase noise $\varphi_{n}(x,y)$. Hence, the unwrapped phase image is expressed as
$$\varphi(x,y)=\varphi_{s}(x,y)+\varphi_{a}(x,y)+\varphi_{n}(x,y).\tag{3}$$
Conventionally, to get the accurate phase image $\varphi_{s}(x,y)$, two independent steps are used to remove the phase aberrations and the phase noise. In contrast, in the proposed method, a wrapped phase image is fed into the CNN, and the desired phase image is output directly.
To train the CNN, a large number of wrapped phase images is generated according to Eq. (3). The aberration phase term $\varphi_{a}(x,y)$ can be modeled with a parabolic function [12], standard polynomials [18], or Zernike polynomials [19]. Here, for diversity of the phase maps and simplicity in generating training data, aberration terms that include linear phase tilt, quadratic phase curvature, and high-order aberrations are generated with Zernike polynomials:
$$\varphi_{a}(x,y)=\sum_{i}c_{i}Z_{i}(x,y).\tag{4}$$
Here $Z_{i}$ is the $i$th-order Zernike polynomial, and $c_{i}$ is the corresponding Zernike coefficient. The 2nd and 3rd Zernike terms represent linear phase tilt along the $x$ and $y$ directions, respectively, and the 4th term denotes defocus. These three terms constitute the main aberrations in $\varphi_{a}(x,y)$. Thus, to generate diverse fringe patterns, the 2nd and 3rd Zernike coefficients are set in the range of [1, 120], the 4th coefficient in a range bounded above by 25, and the other coefficients in ranges bounded above by 0.5.
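As an illustration of Eq. (4), the sketch below assembles an aberration map from a few Cartesian Zernike terms with random coefficients. Normalization constants are omitted, and the missing lower bounds of the coefficient ranges are set to zero as a placeholder, so this generator only indicates the procedure rather than reproducing the exact dataset parameters.

```python
import numpy as np

def random_aberration(size=256, rng=None):
    """Synthesize an aberration phase map as a random sum of Zernike terms, Eq. (4)."""
    if rng is None:
        rng = np.random.default_rng()
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    r2 = x ** 2 + y ** 2

    # Cartesian forms of low-order Zernike polynomials (unnormalized).
    terms = {
        "tilt_x":   x,                    # 2nd term
        "tilt_y":   y,                    # 3rd term
        "defocus":  2.0 * r2 - 1.0,       # 4th term
        "astig_0":  x ** 2 - y ** 2,      # higher-order examples
        "astig_45": 2.0 * x * y,
        "coma_x":   (3.0 * r2 - 2.0) * x,
        "coma_y":   (3.0 * r2 - 2.0) * y,
    }
    # Dominant tilt and defocus coefficients, small high-order contributions;
    # lower bounds of zero are placeholders, not values from the paper.
    coeffs = {
        "tilt_x":   rng.uniform(1.0, 120.0),
        "tilt_y":   rng.uniform(1.0, 120.0),
        "defocus":  rng.uniform(0.0, 25.0),
        "astig_0":  rng.uniform(0.0, 0.5),
        "astig_45": rng.uniform(0.0, 0.5),
        "coma_x":   rng.uniform(0.0, 0.5),
        "coma_y":   rng.uniform(0.0, 0.5),
    }
    return sum(coeffs[name] * term for name, term in terms.items())
```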
Secondly, the phase noise $\varphi_{n}(x,y)$ is acquired through experiments. In previous deep learning-based denoising methods, phase noise was added by simulating Gaussian noise or speckle noise [34–36]. Nevertheless, these regular types of noise cannot represent the actual, sophisticated coherent noise encountered in experimental conditions, while the performance of a trained CNN depends on the noise type in the training set. Here, to enhance the denoising capability for actual phase noise, a set of phase noise images was collected by using the double exposure method [38]. The double exposure method is a reliable method capable of eliminating both low-order and high-order phase aberrations; however, it requires two holograms, one with a sample and the other without. In an off-axis digital holographic microscope, a blank slide is placed on the stage, and a hologram is recorded; a second hologram is recorded after the slide is shifted slightly. With the spectrum filtering method, a phase image containing only noise is obtained from these two holograms. During the experiments, we switched between different objectives and different slides, and we also slightly adjusted the illumination beam angle and the interference angle to produce various phase noise patterns. In total, 50 high-resolution phase images containing only coherent noise were reconstructed with the double exposure method, and they were used for generating $\varphi_{n}(x,y)$.
Finally, the sample phase image $\varphi_{s}(x,y)$ is obtained through both simulation and experiment. Part of the samples, those with binary patterns, are generated by simulation. A set of high-resolution target images containing binary patterns, such as Ronchi gratings with different periods and ruling directions and an NBS-1963A target, is prepared. The simulated sample phase maps are then generated from these images: a patch is randomly cropped from one of the high-resolution images and multiplied by a random value in the range (0, ] to simulate the height of the sample, and it serves as a sample phase map. The other part of the samples is prepared experimentally. We measured a set of blood smears and PMMA beads suspended in oil. These sample phase maps are first reconstructed with the double exposure method [38], by which the phase aberrations are compensated. To remove the coherent noise in the images, BM3D [33], a state-of-the-art denoising method, was applied. The filtered images are considered as the accurate (ground truth) sample phase images. Similarly, a patch of the same size is randomly cropped from a filtered image to generate training data.
By adding an aberration term and a noise term to the sample phase map, the noisy and aberrated phase map is obtained. A modulo operation is then applied to get the wrapped phase map as the input data for training the CNN. This data generation process is shown schematically in Fig. 2. In addition, to further improve the adaptation of the CNN to practical data, 300 wrapped phase maps reconstructed directly from experiments were also added to the dataset. In total, a dataset containing 30,000 data pairs (wrapped phase image $\varphi_{w}$ and noise-free sample phase image $\varphi_{s}$) was generated. It was divided into a training dataset and a validation dataset in the ratio of 8:2.
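A compact sketch of how one training pair can be assembled from a clean sample phase patch, a synthetic aberration map, and an experimentally acquired noise patch is given below. It follows Eq. (3) and the subsequent wrapping step; the function and variable names are chosen only for illustration.

```python
import numpy as np

def make_training_pair(sample_phase, aberration, noise_phase):
    """Assemble one (input, label) pair: Eq. (3) followed by phase wrapping.

    sample_phase : clean sample phase patch (the training label)
    aberration   : synthetic Zernike aberration phase of the same size
    noise_phase  : experimentally acquired coherent-noise phase patch
    """
    # Aberrated, noisy unwrapped phase, Eq. (3).
    unwrapped = sample_phase + aberration + noise_phase

    # Wrap into (-pi, pi] to mimic the phase retrieved from a hologram;
    # the wrapped map is the network input, the clean sample phase the label.
    wrapped = np.angle(np.exp(1j * unwrapped))
    return wrapped.astype(np.float32), sample_phase.astype(np.float32)
```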
B. Network Training
The CNN used in this work is PACUnet3+, which was proposed recently [32]. PACUnet3+ derives from the original Unet3+ network and inherits its structure. The network consists of two parts, encoders and decoders, where full-scale skip connections are adopted to make full use of multi-scale feature maps and improve the training accuracy [39]. Instead of the simple convolution layers used in Unet3+, each layer of both the encoders and decoders adopts a residual connection block (ResBlock) [40] to prevent network degradation and reduce the loss of feature information. Meanwhile, the efficient channel attention (ECA) mechanism [41] is implemented in the ResBlock to enhance the channel characteristics. In previous work, PACUnet3+ was trained to predict a sample-free hologram from a sample hologram, and the phase aberration was then corrected as in the double exposure method. However, the input hologram must have the same size as that used in training: resizing an off-axis hologram larger than the training size would induce an aliasing effect, and the network may then fail to work.
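The exact PACUnet3+ layer configuration is given in Ref. [32]; the PyTorch sketch below only illustrates one plausible way to combine a residual block with the ECA mechanism, with channel counts, kernel sizes, and normalization layers chosen as assumptions rather than the paper's settings.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient channel attention: a 1D convolution over pooled channel descriptors."""
    def __init__(self, k_size=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)

    def forward(self, x):
        w = self.pool(x)                                # (B, C, 1, 1) channel descriptors
        w = self.conv(w.squeeze(-1).transpose(-1, -2))  # 1D conv across the channel axis
        w = torch.sigmoid(w.transpose(-1, -2).unsqueeze(-1))
        return x * w                                    # reweight channels

class ECAResBlock(nn.Module):
    """Residual block with ECA, a plausible stand-in for the ResBlock used in PACUnet3+."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch),
            ECA(),
        )
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Residual connection mitigates degradation and preserves feature information.
        return self.act(self.body(x) + self.skip(x))
```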
Here a wrapped phase image reconstructed from an off-axis hologram is fed into the network, and the output provides the sample phase image directly. In the training, 24,000 wrapped phase images are used as input, while the corresponding sample phase images are used as training labels; the remaining data pairs are used for validation. The root mean square error (RMSE) is one of the most commonly used metrics for quantifying the difference between two images, while the structural similarity index measure (SSIM) evaluates the structural similarity between them. During the training, we found that a hybrid loss function combining an RMSE term with an SSIM term can be continuously minimized with the Adam optimizer.
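Since the exact weighting of the hybrid loss is not reproduced here, the sketch below shows one plausible combination of an RMSE term and an SSIM term, using the third-party pytorch_msssim package for the SSIM computation; the weight alpha and the data range are assumptions for illustration, not the values used to train PACUnet3+.

```python
import math
import torch
from pytorch_msssim import ssim  # third-party SSIM; any SSIM implementation would do

def hybrid_loss(pred, target, alpha=0.5, data_range=2 * math.pi):
    """RMSE combined with (1 - SSIM); pred and target are (B, 1, H, W) tensors.

    alpha and data_range are illustrative assumptions, not the paper's exact values.
    """
    rmse = torch.sqrt(torch.mean((pred - target) ** 2))
    ssim_term = 1.0 - ssim(pred, target, data_range=data_range, size_average=True)
    return alpha * rmse + (1.0 - alpha) * ssim_term
```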
The learning rate was reduced to one-half of its previous value every 25 epochs. A batch size of 8 was set because the network has a large number of parameters to be updated and the training platform is limited. The network was trained for 110 epochs on a server with a Xeon Platinum 8260 CPU, 84 GB of RAM, and an NVIDIA GeForce RTX 3090 GPU. The whole training lasted 150 h, and it takes about 0.75 s for the trained network to output an aberration- and noise-free phase map from the input. For comparison, Unet3+ was also trained with the same dataset. Figure 3 presents the training loss and validation loss curves for both networks. After the same number of training epochs, PACUnet3+ always has smaller training loss values than Unet3+. In addition, the validation curve of Unet3+ fluctuates severely. From these observations, we expect PACUnet3+ to perform better than its counterpart Unet3+.
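The schedule described above can be expressed compactly as follows; the initial learning rate and the stand-in model are placeholders, since neither is specified here.

```python
import torch

# Sketch of the training schedule: Adam, learning rate halved every 25 epochs,
# 110 epochs in total. The initial rate (1e-3) and the one-layer stand-in model
# are placeholders, not the values or network actually used.
model = torch.nn.Conv2d(1, 1, 3, padding=1)  # stand-in for PACUnet3+
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=25, gamma=0.5)

for epoch in range(110):
    # ... one pass over the training set (batch size 8) goes here ...
    scheduler.step()
```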
3. NUMERICAL RESULTS
After training, we first compared the performance of PACUnet3+ against Unet3+. To evaluate the accuracy of the network output, the point-to-point phase error map is calculated by subtracting the simulated ground truth sample phase map from the output. The RMSE and the peak-to-valley (PV) value of the error map are two quality metrics used to analyze the PAC and DN performance of the network. In addition, to further evaluate the DN capability of the network, the peak signal-to-noise ratio (PSNR) of the output image is calculated with the ground truth sample phase map as the reference. We generated 100 new image pairs, i.e., wrapped phase images containing the aberrations and noise-free sample phase images, according to Section 2.A. The 100 aberrated and noisy wrapped phase images were used as input to the trained networks. Subsequently, two groups of 100 phase error maps corresponding to the two networks were calculated. The statistical results of the RMSE and PV of these two groups of error maps are shown in Figs. 4(a) and 4(b), respectively. For PACUnet3+, the average RMSE value is 0.0654 rad, and the average PV value is 0.967 rad, while they are 0.1359 rad and 1.5062 rad for Unet3+. These values indicate that PACUnet3+ performs better than the Unet3+ network. In addition, the distributions of the PSNR for the two groups of outputs are presented in Fig. 4(c). The average PSNR value is 30.623 dB for PACUnet3+ and 27.108 dB for Unet3+, while the average PSNR of the original maps is 16.712 dB, which further indicates that both networks filter the noise effectively and that the trained PACUnet3+ has the better denoising capability.
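For reference, the three evaluation metrics can be computed as in the sketch below; the PSNR convention (using the ground-truth peak-to-valley as the signal range) is one common choice and an assumption about the exact definition used here.

```python
import numpy as np

def phase_metrics(pred, truth):
    """RMSE and PV of the error map, plus PSNR of the predicted phase map."""
    err = pred - truth
    rmse = np.sqrt(np.mean(err ** 2))        # root mean square error (rad)
    pv = err.max() - err.min()               # peak-to-valley of the error (rad)
    peak = truth.max() - truth.min()         # signal range of the ground truth
    psnr = 20.0 * np.log10(peak / rmse)      # PSNR in dB (assumed convention)
    return rmse, pv, psnr
```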
To intuitively demonstrate the reconstruction results, two different samples are shown in Fig. 5. For both examples, Fig. 5(a) shows the wrapped phase images, in which phase aberrations and phase noise are present. Figure 5(b) shows the two sample phase maps, which are aberration-free and noise-free. The outputs predicted by the trained PACUnet3+ are shown in Fig. 5(c). As a comparison, Fig. 5(d) presents the results output by Unet3+. The profiles along the lines marked in Figs. 5(b)–5(d) are also plotted in Fig. 5(e) to compare the reconstruction accuracy.
For comparison, these two samples are also reconstructed with three other conventional methods. First, the phase aberrations present in the wrapped phase images shown in Fig. 5(a) are compensated with the PCA, Zernike polynomial fitting (ZPF), and traditional 2D polynomial fitting (PF) methods. Subsequently, the three aberration-corrected maps are filtered with BM3D to reduce the phase noise, and the final results are shown in Fig. 6. Figure 6(a) presents the results reconstructed with the PCA and BM3D methods. Because PCA cannot deal with high-order phase aberrations, the results in Fig. 6(a) have obvious errors. With the other two fitting methods, the phase aberrations are not fully compensated, as shown in Figs. 6(b) and 6(c); a possible reason is that the phase noise has an adverse effect on the numerical fitting. Figure 6(d) also plots the profiles along the same lines marked in Figs. 6(a)–6(c). The evaluation indices of the reconstructed results are calculated and presented in Table 1. From the results presented in Figs. 5 and 6 and Table 1, we conclude that the results obtained from PACUnet3+ have fewer reconstruction errors and higher PSNR values than those from Unet3+ and the other three conventional two-step methods.
4. EXPERIMENTAL RESULTS
To experimentally demonstrate the capability of the network, we tested two different samples: a USAF 1951 resolution target and a blood smear. The off-axis holograms were recorded with a transmissive digital holographic microscope based on a Mach–Zehnder interferometer; a detailed description of the setup can be found in Ref. [32]. In the setup, a tube lens is placed behind the microscope objective to compensate for most of the phase aberrations, which is common practice in most digital holographic microscopes [27,28]. For each sample, the off-axis hologram is first Fourier transformed, and its object diffraction order is filtered. Afterwards, both wrapped phase maps are reconstructed and presented in Fig. 7(a). For comparison, these two samples are also reconstructed with the double exposure method to compensate for the phase aberrations and denoised with the BM3D method to suppress the coherent noise. These two phase maps are considered to be aberration-free and noise-free; they are presented in Fig. 7(b) and used as the references for comparison. Though the double exposure method can produce an aberration-free phase image, it needs two holograms, a sample hologram and a sample-free hologram, and the sample-free hologram is not always available. In contrast, the proposed method only needs a sample hologram to retrieve a wrapped phase map as the input of the network. The results predicted by PACUnet3+ and Unet3+ are shown in Figs. 7(c) and 7(d), respectively. For comparison, three indices of the results are calculated and listed in Table 2. For the first example, the phase error of the result obtained from PACUnet3+ has an RMSE of 0.1044 rad and a PV of 2.3943 rad, whereas they are 0.1467 rad and 2.8651 rad for the result obtained from Unet3+. In addition, the result of PACUnet3+ has a higher PSNR value of 29.7463 dB, compared with 25.154 dB obtained from Unet3+. For the second sample, similar results are obtained, and they are listed in the figure and in Table 2. It should be noted that the phase maps reconstructed with the double exposure method shown in Fig. 7(b) have obvious phase ripples or phase fluctuations in their backgrounds. Such phenomena may be induced by the parasitic fringes or severe speckle noise recorded in the hologram, which cannot be fully filtered by any of the conventional denoising methods. In contrast, the phase maps predicted by the deep learning-based methods have relatively smoother backgrounds. It should also be noted that the phase images processed here have a resolution much larger than that used in training. To process these phase maps, they are resized to the training resolution and then fed into the trained network, and the output images are enlarged accordingly. This is an obvious advantage of this method, as wrapped phase images of different sizes, produced by different camera sensors, can be processed directly. Conversely, in other deep learning methods that take a hologram as input, downsampling a larger hologram to the training size generally leads to aliasing and thus to failure of the methods [31,32].
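The resize-predict-resize procedure can be sketched as follows; the training resolution of 256 pixels and the bilinear interpolation are assumptions, since the text only states that the maps are resized to the training size and the outputs are enlarged back.

```python
import numpy as np
import torch
import torch.nn.functional as F

def predict_full_size(model, wrapped_phase, train_size=256):
    """Apply a trained network to a wrapped phase map larger than the training size.

    train_size and the interpolation mode are illustrative assumptions.
    """
    h, w = wrapped_phase.shape
    x = torch.from_numpy(wrapped_phase.astype(np.float32))[None, None]  # (1, 1, H, W)
    # Resize down to the resolution the network was trained on.
    x_small = F.interpolate(x, size=(train_size, train_size),
                            mode="bilinear", align_corners=False)
    with torch.no_grad():
        y_small = model(x_small)
    # Resize the predicted sample phase map back to the original resolution.
    y_full = F.interpolate(y_small, size=(h, w),
                           mode="bilinear", align_corners=False)
    return y_full[0, 0].cpu().numpy()
```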
We also reconstructed the samples with the three other conventional methods in which two independent steps are applied for phase aberration compensation and denoising. The reconstructed results are presented in Fig. 8, and the calculated evaluation indices are listed in Table 2. For both samples, none of the three methods (PCA, ZPF, or PF, each followed by BM3D) can fully compensate for the phase aberrations and remove the phase noise. It should be noted that in the latter two methods, a phase unwrapping procedure is applied before the phase fitting to get the unwrapped phase map; the excessive coherent noise may induce phase errors in this step and thus affect the accuracy of the numerical fitting. For both samples, phase profiles along the lines in the different reconstructed results are also plotted in Fig. 8(d). It is obvious that the phase curves obtained from the conventional methods exhibit severe deviations from the reference phase curve. Among the results presented in Figs. 7 and 8, the phase curve of PACUnet3+ matches the reference best. These results indicate that, for wrapped phase maps obtained from experiments, the trained PACUnet3+ is less affected by the complicated phase aberrations and coherent noise and is capable of yielding a sample phase image with the highest accuracy.
5. CONCLUSION
In summary, a one-step state-of-the-art deep learning-based algorithm can be trained for PAC and DN. To achieve this, a faithful database is constructed for training according to the wrapped phase map generation model. In the database, the phase aberrations containing both low- and high-order terms are simulated with Zernike polynomials, while the phase noise is taken from noise recorded in real measurement setups. Thus, the trained CNN has good adaptability to practical aberrated and noisy phase maps. With the trained CNN, an aberration-free and denoised phase map is yielded directly from a wrapped phase map. Both the simulation and experimental results show that the proposed method can output a sample phase map with lower RMSE and PV values and a higher PSNR value compared with the Unet3+ network and the other three conventional two-step methods. Beyond its high-accuracy phase aberration compensation and phase noise removal capability, the proposed method offers an efficient way to optimize the reconstruction pipeline for quantitative phase imaging in DHM owing to its two-steps-in-one nature.
Funding
Key scientific research program of Education Department in Shaanxi Province of China (22JY027); Natural Science Basic Research Program of Shaanxi Province (2023JCYB-513); Key special project of “two chains integration photon integration and manufacturing” in Shaanxi Province (2021LLRH-03).
Acknowledgment
R. Guo is grateful for the support from Xi’an Technological University.
Disclosures
The authors declare no conflicts of interest.
Data availability
The data that support the findings of this study are available from the corresponding authors upon reasonable request.
REFERENCES
1. B. Javidi, A. Carnicer, A. Anand, et al., “Roadmap on digital holography,” Opt. Express 29, 35078–35118 (2021). [CrossRef]
2. V. Balasubramani, M. Kujawińska, C. Allier, et al., “Roadmap on digital holography-based quantitative phase imaging,” J. Imaging 7, 252 (2021). [CrossRef]
3. M. Matrecano, P. Memmolo, L. Miccio, et al., “Improving holographic reconstruction by automatic Butterworth filtering for microelectromechanical systems characterization,” Appl. Opt. 54, 3428–3432 (2015). [CrossRef]
4. R. Guo, F. Wang, X. Hu, et al., “Off-axis low coherence digital holographic interferometry for quantitative phase imaging with an LED,” J. Opt. 19, 115702 (2017). [CrossRef]
5. Y. Liu and S. Uttam, “Perspective on quantitative phase imaging to improve precision cancer medicine,” J. Biomed. Opt. 29, S22705 (2024). [CrossRef]
6. R. Guo, I. Barnea, and N. T. Shaked, “Limited-angle tomographic phase microscopy utilizing confocal scanning fluorescence microscopy,” Biomed. Opt. Express 12, 1869–1881 (2021). [CrossRef]
7. V. Micó, J. Zheng, J. Garcia, et al., “Resolution enhancement in quantitative phase microscopy,” Adv. Opt. Photonics 11, 135–214 (2019). [CrossRef]
8. Y. K. Park, C. Depeursinge, and G. Popescu, “Quantitative phase imaging in biomedicine,” Nat. Photonics 12, 578–589 (2018). [CrossRef]
9. Z. Z. Huang and L. C. Cao, “Quantitative phase imaging based on holography: trends and new perspectives,” Light Sci. Appl. 13, 145 (2024). [CrossRef]
10. T. L. Nguyen, S. Pradeep, R. L. Judson-Torres, et al., “Quantitative phase imaging: recent advances and expanding potential in biomedicine,” ACS Nano 16, 11516–11544 (2022). [CrossRef]
11. Z. Z. Huang and L. C. Cao, “Phase aberration separation for holographic microscopy by alternating direction sparse optimization,” Opt. Express 31, 12520–12533 (2023). [CrossRef]
12. J. Min, B. Yao, S. Ketelhut, et al., “Simple and fast spectral domain algorithm for quantitative phase imaging of living cells with digital holographic microscopy,” Opt. Lett. 42, 227–230 (2017). [CrossRef]
13. W. Xiao, L. Xin, R. Cao, et al., “Sensing morphogenesis of bone cells under microfluidic shear stress by holographic microscopy and automatic aberration compensation with deep learning,” Lab Chip 21, 1385–1394 (2021). [CrossRef]
14. V. Bianco, P. Memmolo, M. Leo, et al., “Strategies for reducing speckle noise in digital holography,” Light Sci. Appl. 7, 48–58 (2018). [CrossRef]
15. W. Lyu and Y. Shi, “Efficient phase aberration compensation for digital holographic microscopy based on aberration-oriented phase unwrapping,” Opt. Commun. 554, 130212 (2024). [CrossRef]
16. T. Nguyen, G. Nehmetallah, C. Raub, et al., “Accurate quantitative phase digital holographic microscopy with single- and multiple-wavelength telecentric and nontelecentric configurations,” Appl. Opt. 55, 5666–5683 (2016). [CrossRef]
17. J. Di, J. Zhao, W. Sun, et al., “Phase aberration compensation of digital holographic microscopy based on least squares surface fitting,” Opt. Commun. 282, 3873–3877 (2009). [CrossRef]
18. T. Colomb, E. Cuche, F. Charrière, et al., “Automatic procedure for aberration compensation in digital holographic microscopy and applications to specimen shape compensation,” Appl. Opt. 45, 851–863 (2006). [CrossRef]
19. L. Miccio, D. Alfieri, S. Grilli, et al., “Direct full compensation of the aberrations in quantitative phase microscopy of thin objects by a single digital hologram,” Appl. Phys. Lett. 90, 041104 (2007). [CrossRef]
20. C. Zuo, Q. Chen, W. Qu, et al., “Phase aberration compensation in digital holographic microscopy based on principal component analysis,” Opt. Lett. 38, 1724–1726 (2013). [CrossRef]
21. G. Coppola, G. D. Caprio, M. Gioffré, et al., “Digital self-referencing quantitative phase microscopy by wavefront folding in holographic image reconstruction,” Opt. Lett. 35, 3390–3392 (2010). [CrossRef]
22. D. Deng, W. Qu, W. He, et al., “Phase aberration compensation for digital holographic microscopy based on geometrical transformations,” J. Opt. 21, 085702 (2019). [CrossRef]
23. S. Liu, Q. Lian, Y. Qing, et al., “Automatic phase aberration compensation for digital holographic microscopy based on phase variation minimization,” Opt. Lett. 43, 1870–1873 (2018). [CrossRef]
24. Z. Ren, J. Zhao, and E. Y. Lam, “Automatic compensation of phase aberrations in digital holographic microscopy based on sparse optimization,” APL Photonics 4, 110808 (2019). [CrossRef]
25. Z. Chen, W. Zhou, H. Zhang, et al., “Phase aberration adaptive compensation in digital holography based on phase imitation and metric optimization,” Opt. Express 31, 21048–21062 (2023). [CrossRef]
26. Z. Z. Huang, F. Yang, B. Liu, et al., “Aberration-free synthetic aperture phase microscopy based on alternating direction method,” Opt. Lasers Eng. 160, 107301 (2023). [CrossRef]
27. T. Nguyen, V. Bui, V. Lam, et al., “Automatic phase aberration compensation for digital holographic microscopy based on deep learning background detection,” Opt. Express 25, 15043–15057 (2017). [CrossRef]
28. T. Chang, D. Ryu, Y. Jo, et al., “Calibration-free quantitative phase imaging using data-driven aberration modeling,” Opt. Express 28, 34835–34847 (2020). [CrossRef]
29. L. Huang, J. Tang, L. Yan, et al., “Wrapped phase aberration compensation using deep learning in digital holographic microscopy,” Appl. Phys. Lett. 123, 141109 (2023). [CrossRef]
30. J. Tang, J. Zhang, S. Zhang, et al., “Phase aberration compensation via a self-supervised sparse constraint network in digital holographic microscopy,” Opt. Lasers Eng. 168, 107671 (2023). [CrossRef]
31. S. Ma, Q. Liu, Y. Yu, et al., “Quantitative phase imaging in digital holographic microscopy based on image inpainting using a two-stage generative adversarial network,” Opt. Express 29, 24928–24946 (2021). [CrossRef]
32. Z. Li, F. Wang, P. Jin, et al., “Accurate phase aberration compensation with convolutional neural network PACUnet3+ in digital holographic microscopy,” Opt. Lasers Eng. 171, 107829 (2023). [CrossRef]
33. K. Dabov, A. Foi, V. Katkovnik, et al., “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Trans. Image Process. 16, 2080–2095 (2007). [CrossRef]
34. S. Montresor, M. Tahon, A. Laurent, et al., “Computational de-noising based on deep learning for phase data in digital holographic interferometry,” APL Photonics 5, 030802 (2020). [CrossRef]
35. Q. Fang, H. Xia, Q. Song, et al., “Speckle denoising based on deep learning via a conditional generative adversarial network in digital holographic interferometry,” Opt. Express 30, 20666–20683 (2022). [CrossRef]
36. J. Wu, J. Tang, J. Zhang, et al., “Coherent noise suppression in digital holographic microscopy based on label-free deep learning,” Front. Phys. 10, 880403 (2022). [CrossRef]
37. J. Tang, B. Chen, L. Yan, et al., “Continuous phase denoising via deep learning-based on Perlin noise similarity in digital holographic microscopy,” IEEE Trans. Ind. Inf. 20, 8707–8716 (2024). [CrossRef]
38. P. Ferraro, S. D. Nicola, A. Finizio, et al., “Compensation of the inherent wave front curvature in digital holographic coherent microscopy for quantitative phase-contrast imaging,” Appl. Opt. 42, 1938–1946 (2003). [CrossRef]
39. H. Huang, L. Lin, R. Tong, et al., “UNet3+: a full-scale connected U-Net for medical image segmentation,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2020), pp. 1055–1059.
40. K. He, X. Zhang, S. Ren, et al., “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 770–778.
41. Q. Wang, B. Wu, P. Zhu, et al., “ECA-Net: efficient channel attention for deep convolutional neural networks,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (2020), pp. 11531–11539.