Photonics Research, Volume 12, Issue 11, 2703 (2024)

Multifunctional computational fluorescence self-interference holographic microscopy

Wenxue Zhang1, Tianlong Man1, Minghua Zhang1, Hongqiang Zhou1, Zenghua Liu2, and Yuhong Wan1,*
Author Affiliations
  • 1School of Physics and Optoelectronic Engineering, Beijing University of Technology, Beijing 100124, China
  • 2School of Information Science and Technology, Beijing University of Technology, Beijing 100124, China

    Fluorescence microscopy is crucial in various fields such as biology, medicine, and the life sciences. Fluorescence self-interference holographic microscopy has great potential in bio-imaging owing to its unique wavefront coding characteristics; thus, it can serve as a three-dimensional (3D), scanning-free, super-resolution microscopy technique. However, the available approaches are limited by low optical efficiency, complex optical setups, and single imaging functions. The geometric phase lens can efficiently manipulate the optical field's amplitude, phase, and polarization. Inspired by geometric phase and self-interference holography, a self-interference fluorescence holographic microscope based on a geometric phase lens is proposed. This system allows for wide-field imaging, 3D fluorescence holographic imaging, and edge enhancement from the reconstruction of only one complex-valued hologram. Experiments demonstrate the effectiveness of our method in imaging biological samples, with improved resolution and signal-to-noise ratio. Furthermore, its simplicity and convenience make it easily compatible with existing optical microscope setups, making it a powerful tool for observing biological samples and detecting industrial defects.

    1. INTRODUCTION

    Optical microscopy, known for its non-invasive nature and high spatial resolution, has become a crucial tool in biomedical research for directly observing subtle structures. Recent advancements have significantly improved the ability to acquire multidimensional information [1,2] and to dynamically observe living biological systems [3], largely driven by fluorescent dye technologies [4,5]. Fluorescence microscopes capture the entire field of view simultaneously, offering faster speeds than point-scanning methods [6,7]. The simplicity and cost-effectiveness of fluorescence microscopy have led to the development of commercial miniaturized systems [8]. Traditional wide-field fluorescence microscopes capture only the two-dimensional intensity of complex 3D samples, lacking phase information. In contrast, holography can achieve 3D imaging by recording interference patterns that depend on spatial position [9]. This unique feature makes holography particularly attractive in microscopy, enabling the capture of 3D object information with fewer exposures. Holography eliminates the need to acquire multiple image stacks for 3D reconstruction [10–12], offering simplicity, speed, and high efficiency. However, fluorescence is incoherent, and current holographic techniques primarily rely on coherent light, leading to speckle noise and sensitivity to environmental disturbances.

    Fresnel incoherent correlation holography (FINCH) provides 3D imaging without laser illumination or mechanical scanning [13–15]. It simplifies holography, particularly in fluorescence holographic microscopy [16]. FINCH offers 1.5–2 times better lateral resolution than classical systems [17,18]. Traditional spatial light modulator methods introduce complexity and optical aberrations [16]. The beam-splitting method in Michelson interferometers increases light energy loss [19]. Birefringent lenses improve optical efficiency [20,21], but they easily introduce errors due to manufacturing tolerances and vibrations. Additionally, edges play an essential role in observing subtle structures [22]. Optical edge detection methods [23,24] offer advantages over computer-based approaches, including adaptability, speed, parallel processing, and information capacity [25]. Optical edge detection methods typically employ 4f optical systems [23,25,26] utilizing spiral phase modulation. Incoherent digital holography with spiral phase modulation [27,28] opens new frontiers under incoherent light. However, FINCH suffers from heavy background noise, resolution limits, and high costs due to complex setups. Meanwhile, multi-modal microscopes are bulky, expensive, and not portable. Remarkably, the geometric phase method has obvious advantages in reducing the complexity of holographic systems and improving the wave-splitting efficiency [14,15,29–31].

    There is an urgent need for microscopy capable of simultaneous wide-field, incoherent holographic, and edge imaging, enabling practical applications. To address these challenges, we have developed a computational multifunctional self-interference digital holographic microscope utilizing a geometric phase lens (GP-SIDH), overcoming the constraints of conventional optical components. Computational adaptive optics (CAO) and computational edge enhancement (CEE) algorithms are combined to reconstruct the hologram. The proposed microscope achieves traditional wide-field imaging, 3D fluorescence holographic imaging, and edge enhancement simultaneously. It seamlessly transitions between imaging modes via a specially designed wave-splitting device, capturing complex-valued holograms on the same plane for computational reconstruction. Despite integrating three imaging modalities, the microscope maintains a compact, versatile design employing polarization and spatial multiplexing techniques. Experimental results demonstrate the system's effectiveness in observing biological samples, enabling rapid and reliable imaging in various modes and providing higher-resolution views of subtle structures and edge details. Notably, the GP-SIDH microscope can be easily integrated into existing optical setups because of its simplified self-interference components and the advanced reconstruction algorithms introduced. In the following, we present the fundamental principles, experimental validation, and analysis of the results.

    2. METHOD

    A. Principles of GP-SIDH

    The conventional wide-field imaging process is illustrated in Fig. 1(a), where the camera can only capture the intensity information near the system's imaging plane; consequently, the three-dimensional information is lost. The optical configuration of the GP-SIDH system is depicted in Fig. 1(b). The object is illuminated with incoherent light to guarantee complete incoherence between any two points on the object. The light diffracted from the object is collected by the lens, located at a distance d0 from the object; the focal length of the lens is f. Next, the object light is transformed into linearly polarized light by polarizer P1 and then propagates over a distance di to reach the GP lens. The GP lens, fabricated from a liquid crystal polymer with birefringent properties, modulates the diffracted light: it acts as a positive lens for left-handed circularly polarized light and as a negative lens for right-handed circularly polarized light, as shown in Fig. 2(a). The linearly polarized object light, upon being split by the GP lens, is converted into right-handed and left-handed circularly polarized components. These components are then aligned to the same polarization orientation by polarizer P2. The two parts of the object light interfere at the image plane located at a distance ds, and the interference pattern is recorded by the imaging device.


    Figure 1. Schematic illustration of a wide-field and GP-SIDH imaging system.


    Figure 2. Imaging schematic for multifunctional GP-SIDH microscopy. (a) The property of the GP lens. (b) The principle of the geometric phase shift. (c) The process of computational reconstruction.

    We assume that the Fresnel approximation is satisfied in the optical setup of Fig. 1(b). A single point object is diffracted to the front surface of the geometric phase lens. The light field there can be expressed as
    $$U_{\mathrm{gp}} = Q\left(\frac{1}{d_0}\right) Q\left(-\frac{1}{f}\right) * Q\left(\frac{1}{d_i}\right),$$
    where $Q(1/a) = \exp[\,j\pi(x^2 + y^2)/(\lambda a)\,]$, x and y are spatial coordinates, λ is the central wavelength, j is the imaginary unit, and * denotes convolution.

    The GP lens acts as a positive lens on the left-circularly polarized component and as a negative lens on the right-circularly polarized component. The intensity distribution on the CCD plane is
    $$I_i = \left| \frac{1}{2}\, Q\left(\frac{1}{d_0}\right) Q\left(-\frac{1}{f}\right) * Q\left(\frac{1}{d_i}\right) Q\left(-\frac{1}{f_{\mathrm{gp}}}\right) e^{j\varphi_{\mathrm{ab}}} e^{j\theta_i} * Q\left(\frac{1}{d_s}\right) + \frac{1}{2}\, Q\left(\frac{1}{d_0}\right) Q\left(-\frac{1}{f}\right) * Q\left(\frac{1}{d_i}\right) Q\left(\frac{1}{f_{\mathrm{gp}}}\right) e^{j\varphi_{\mathrm{ab}}} e^{-j\theta_i} * Q\left(\frac{1}{d_s}\right) \right|^2.$$

    In this equation, φab is the term corresponding to optical aberrations and θi is the ith phase shift value. Multiple phase-shifted holograms are recorded to obtain a complex-valued hologram free of the twin-image and zero-order terms [13,16]. As described in Fig. 2(b), rotating P1 by π/4 induces a geometric phase shift of π/2. Consequently, rotating P1 to 0, π/4, π/2, and 3π/4 yields object holograms with phase shifts of 0, π/2, π, and 3π/2, respectively. Linear polarizer P2 projects the two circularly polarized beams of opposite handedness onto the same polarization direction, thus enabling interference.
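    This geometric phase shift can be verified numerically with a few lines of Jones calculus. The sketch below is our own illustration, not code from the paper; the circular-basis sign convention is one common choice. It decomposes the linear polarization transmitted by P1 into left- and right-circular components and prints their relative phase, which equals twice the polarizer angle.

```python
import numpy as np

# Left- and right-circular Jones basis vectors (one common sign convention).
L_CIRC = np.array([1,  1j]) / np.sqrt(2)
R_CIRC = np.array([1, -1j]) / np.sqrt(2)

for alpha in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):   # rotation angles of P1
    E = np.array([np.cos(alpha), np.sin(alpha)])          # linear polarization after P1
    c_l = np.vdot(L_CIRC, E)                              # left-circular amplitude
    c_r = np.vdot(R_CIRC, E)                              # right-circular amplitude
    dphi = np.angle(c_r / c_l) % (2 * np.pi)              # relative (geometric) phase
    print(f"P1 angle {alpha:.4f} rad -> phase shift {dphi:.4f} rad")

# Output: phase shifts of 0, pi/2, pi, and 3*pi/2, i.e., twice the polarizer rotation.
```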

    The object hologram is the incoherent summation of the holograms of all point sources at different object distances. It is expressed as the convolution between the intensity transmittance function of the object Oi and the point-source hologram Ii,
    $$OH_i = O_i * I_i,$$
    where OHi is the ith phase-shifted hologram and the phase shift values are 0, π/2, π, and 3π/2, respectively.

    The object's complex-valued hologram Ha can then be represented as
    $$H_a = (OH_1 - OH_3) + j\,(OH_2 - OH_4).$$
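    As a concrete illustration of the recording model, the sketch below is our own simulation with made-up parameters rather than the experimental values. It assumes the point source lies in the front focal plane of the collection lens, so the field reaching the GP lens is approximately collimated, and splits the phase shift symmetrically as ±θ/2 between the two branches (only the relative shift matters). The four phase-shifted holograms are then combined into the complex-valued hologram Ha.

```python
import numpy as np

N, dp, wl = 512, 13e-6, 532e-9                       # grid size, pixel pitch (m), wavelength (m)
fgp, ds = 2.5, 0.2                                   # GP-lens focal length, GP-lens-to-sensor distance (m)
x = (np.arange(N) - N / 2) * dp
X, Y = np.meshgrid(x, x)
Q = lambda inv_a: np.exp(1j * np.pi * (X**2 + Y**2) * inv_a / wl)   # quadratic phase Q(1/a)

def propagate(u, dz):
    """Free-space angular-spectrum propagation over distance dz."""
    fx = np.fft.fftfreq(N, d=dp)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wl * FX)**2 - (wl * FY)**2
    H = np.exp(2j * np.pi * dz / wl * np.sqrt(np.maximum(arg, 0))) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(u) * H)

u_gp = np.ones((N, N), complex)                      # collimated field at the GP lens
holos = []
for theta in (0, np.pi / 2, np.pi, 3 * np.pi / 2):
    u_pos = propagate(u_gp * Q(-1 / fgp) * np.exp( 1j * theta / 2), ds)  # positive-lens branch
    u_neg = propagate(u_gp * Q( 1 / fgp) * np.exp(-1j * theta / 2), ds)  # negative-lens branch
    holos.append(np.abs(0.5 * (u_pos + u_neg))**2)                       # recorded intensity I_i

OH1, OH2, OH3, OH4 = holos                           # point-source holograms; an extended object gives OH_i = O_i * I_i
Ha = (OH1 - OH3) + 1j * (OH2 - OH4)                  # complex-valued hologram
```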

    B. Principles of Hologram Reconstruction

    The reconstructed images of the 3D object are obtained by the back-propagation algorithm, which can be expressed as
    $$O(x, y, z_r) = \mathcal{F}^{-1}\{\mathcal{F}[H_a(x, y)]\, T(f_x, f_y)\}.$$

    Here, $\mathcal{F}$ and $\mathcal{F}^{-1}$ denote the 2D Fourier transform and its inverse, respectively, and fx and fy are the spatial-frequency coordinates corresponding to the space coordinates x and y. In contrast to an intensity image, O(x, y, zr) theoretically contains the complete object information (amplitude and phase).

    As shown in Fig. 2(c), the conventional SIDH reconstruction usually uses the angular spectrum method, which is based on the diffraction theory of light waves and accurately describes diffraction propagation in the frequency domain. The transfer function for angular spectrum propagation in free space can be expressed as
    $$T(f_x, f_y) = \exp\left[\frac{2 j \pi z_r}{\lambda}\sqrt{1 - \lambda^2 (f_x^2 + f_y^2)}\right],$$
    where zr is the reconstruction distance, which can be obtained by an autofocus algorithm.
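    A minimal NumPy sketch of this reconstruction step is given below; it is our own illustration, not the authors' code. The hypothetical helper refocus() applies the angular-spectrum transfer function above to a square complex hologram Ha with pixel pitch dp and wavelength wl (both in metres), and autofocus() stands in for the autofocus criterion with a simple image-gradient sharpness search over candidate zr values.

```python
import numpy as np

def refocus(Ha, zr, wl, dp):
    """Back-propagate the complex hologram Ha by distance zr (angular spectrum method)."""
    N = Ha.shape[0]
    fx = np.fft.fftfreq(N, d=dp)                    # spatial-frequency coordinates
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wl * FX)**2 - (wl * FY)**2
    T = np.exp(2j * np.pi * zr / wl * np.sqrt(np.maximum(arg, 0))) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(Ha) * T)        # O(x, y, zr)

def autofocus(Ha, wl, dp, z_range):
    """Pick the zr that maximizes a gradient-based sharpness metric."""
    def sharpness(img):
        gx, gy = np.gradient(np.abs(img))
        return np.mean(gx**2 + gy**2)
    scores = [sharpness(refocus(Ha, z, wl, dp)) for z in z_range]
    return z_range[int(np.argmax(scores))]

# Example (illustrative values): zr_best = autofocus(Ha, 532e-9, 13e-6, np.linspace(0.01, 0.05, 41))
```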

    Optical aberrations are widely recognized as detrimental to the fidelity of microscopy imaging. They can be rectified by estimating and applying an appropriate phase mask in the spatial-frequency domain of the reconstructed images [32]. The basic principle of the CAO correction applied to the reconstructed complex-valued image is
    $$O_{\mathrm{AO}}(x, y, z_r) = \mathcal{F}^{-1}\{\mathcal{F}[O(x, y, z_r)] \cdot e^{j\varphi_{\mathrm{AO}}}\}.$$

    The computed phase distribution of the optical aberration is expressed as a combination of Zernike polynomials [1,3]. The CAO algorithm estimates and corrects an optimal set of Zernike coefficients with respect to an evaluation metric of the reconstructed image. In this work, we employed the stochastic parallel gradient descent (SPGD) optimization algorithm [33,34] along with the Tenengrad evaluation function. Further details regarding these methodologies can be found in our previous work [35].
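    The sketch below outlines one possible implementation of this CAO loop; it is ours, not the authors' code. The aberration phase is parameterized by a few low-order Zernike modes over normalized frequency coordinates, the Tenengrad metric uses Sobel gradients, and the coefficients are updated by a basic SPGD rule with bipolar random perturbations. The mode selection, normalization, and step sizes are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import sobel

def tenengrad(img):
    """Tenengrad sharpness metric: mean squared Sobel gradient magnitude."""
    a = np.abs(img)
    return np.mean(sobel(a, axis=0)**2 + sobel(a, axis=1)**2)

def zernike_phase(coeffs, rho, phi):
    """Aberration phase from a few low-order Zernike modes (defocus, astigmatism, coma)."""
    modes = [2 * rho**2 - 1,
             rho**2 * np.cos(2 * phi),
             rho**2 * np.sin(2 * phi),
             (3 * rho**3 - 2 * rho) * np.cos(phi),
             (3 * rho**3 - 2 * rho) * np.sin(phi)]
    return sum(c * m for c, m in zip(coeffs, modes))

def apply_mask(O, coeffs, rho, phi):
    """Multiply the centered spectrum of the reconstruction by exp(j*phi_AO)."""
    spec = np.fft.fftshift(np.fft.fft2(O))
    return np.fft.ifft2(np.fft.ifftshift(spec * np.exp(1j * zernike_phase(coeffs, rho, phi))))

def spgd_cao(O, n_modes=5, iters=200, delta=0.05, gain=0.5):
    """SPGD on Zernike coefficients, maximizing the Tenengrad metric of the corrected image."""
    n = O.shape[0]
    u = np.linspace(-1, 1, n)
    X, Y = np.meshgrid(u, u)                                  # normalized frequency coordinates
    rho, phi = np.hypot(X, Y), np.arctan2(Y, X)
    c = np.zeros(n_modes)
    for _ in range(iters):
        du = delta * np.sign(np.random.randn(n_modes))        # bipolar random perturbation
        jp = tenengrad(apply_mask(O, c + du, rho, phi))
        jm = tenengrad(apply_mask(O, c - du, rho, phi))
        c += gain * (jp - jm) * du                            # SPGD ascent step
    return apply_mask(O, c, rho, phi), c                      # corrected image, coefficients
```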

    We modified the transfer function by introducing a vortex phase factor and an amplitude constraint factor into the conventional transfer function. With this filter, the background noise is well suppressed and the edges become sharper with higher contrast:
    $$T_1(f_x, f_y) = \frac{\rho}{w}\exp\left[-\left(\frac{\rho}{w}\right)^{2}\right]\mathrm{circ}\left(\frac{\rho}{R_0}\right)\exp(j\psi)\, T(f_x, f_y).$$

    Here, w is a parameter controlling the position of the maximum amplitude, and R0 is the radius of the aperture. The vortex phase distribution is ψ = lφ, where (ρ, φ) are polar coordinates in the frequency domain and l is the topological charge; the edges become sharpest, with the highest contrast, when l = 1.
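    A sketch of this modified transfer function is shown below; it is our illustration, and the grid parameters, the sign of the Gaussian exponent, and the values of w and R0 are assumptions consistent with the description above rather than the authors' exact settings.

```python
import numpy as np

def edge_transfer(N, dp, wl, zr, w, R0, l=1):
    """Edge-enhancement filter T1: amplitude-constrained vortex phase times the free-space propagator."""
    fx = np.fft.fftshift(np.fft.fftfreq(N, d=dp))            # centered frequency coordinates (cycles/m)
    FX, FY = np.meshgrid(fx, fx)
    rho, phi = np.hypot(FX, FY), np.arctan2(FY, FX)
    arg = 1 - (wl * FX)**2 - (wl * FY)**2
    T = np.exp(2j * np.pi * zr / wl * np.sqrt(np.maximum(arg, 0))) * (arg > 0)
    amp = (rho / w) * np.exp(-(rho / w)**2)                   # amplitude constraint factor
    aperture = (rho <= R0).astype(float)                      # circ(rho / R0)
    vortex = np.exp(1j * l * phi)                             # spiral phase, psi = l * phi
    return amp * aperture * vortex * T

def edge_reconstruct(Ha, dp, wl, zr, w, R0, l=1):
    """Apply T1 to the complex hologram spectrum and return the edge-enhanced reconstruction."""
    N = Ha.shape[0]
    T1 = edge_transfer(N, dp, wl, zr, w, R0, l)
    spec = np.fft.fftshift(np.fft.fft2(Ha))
    return np.fft.ifft2(np.fft.ifftshift(spec * T1))

# Example (illustrative parameter values): edge = edge_reconstruct(Ha, 13e-6, 532e-9, zr_best, w=2e4, R0=3e4, l=1)
```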

    3. EXPERIMENT AND RESULTS

    A. Fluorescence Self-Interference Holographic Microscopy

    The performance of the proposed method was investigated on a fluorescence self-interference holographic microscope, as depicted in Fig. 3. A 457 nm solid-state laser (85-BLT-605, Melles Griot) was employed as the excitation source. The output light from the fiber was collimated and focused to ensure a sufficient illumination area on the sample plane. After reflection by a dichroic mirror (Q495lp, Olympus), the light was focused onto the back focal plane of the microscope objective (100×, NA 1.3, Olympus) to illuminate the sample over a wide field of view. The fluorescence sample was scanned by a nano-z scanning translation stage (P-736, ZR2S, PI). The emitted fluorescence was collected by the same objective and converged by lens F1 (f1 = 200 mm). After passing through the dichroic mirror (DM), the fluorescence was guided by lens F2 (f2 = 200 mm), filtered by a bandpass filter (central wavelength 532 nm, bandwidth 10 nm), directed to a commercial geometric phase lens (LBTEK, central wavelength λ = 532 nm, f = ±2500 mm), and finally recorded by an EMCCD camera (IXON897, 1024 × 1024 pixels, 13 μm pixel size, Andor). Linear polarizers P1 and P2 were used to adjust the polarization directions of the beams.


    Figure 3. Schematic of the fluorescence holographic microscope. (a) Experimental setup. (b) Wide-field mode images. (c) GP-SIDH mode images, captured at different axial positions, respectively.

    The imaging processes of the wide-field and GP-SIDH modes are elaborated below, with images of objects at different axial positions captured by the camera via the nano-z scanning translation stage. The upper right corner of Fig. 3(a) illustrates the wave-splitting effect of the GP lens on linearly polarized light. Throughout the transitions between imaging modes, the camera position remains fixed, ensuring high system stability. The schematic diagrams of the imaging process are shown in Figs. 3(b) and 3(c). In the wide-field imaging mode, the focal-plane imaging results of 500 nm diameter fluorescent microspheres and the fluorescence resolution board are outlined in the blue box. In the GP-SIDH mode, due to the beam-splitting function of the GP lens, two images are generated at the camera when the target is located at different axial positions, as indicated by the red box. Within a certain axial range, the two images can undergo self-interference, and there exists an optimal interference position between them where the reconstructed image achieves the best resolution and signal-to-noise ratio.

    The lateral and axial resolutions of the proposed microscope were calibrated with 500 nm three-dimensional fluorescent microspheres and a resolution board target. Four holograms with different phase shift values are shown in Figs. 4(a) and 4(c), respectively, revealing that spatial position information is recorded in the form of interference fringes. Figure 4(b) shows the intensity distributions of the reconstructed xy and xz planes of microspheres at different spatial depths: 29.24 μm, 30.6 μm, and 40.24 μm. As shown in Fig. 4(d), the optimal reconstructed image can be obtained via the automatic focusing algorithm based on image gradient evaluation. For comparison, an image stack covering a 40 μm axial depth in wide-field imaging mode [Fig. 4(e)] was acquired with the nano z-stage. Figure 4(f) illustrates the wide-field image of the resolution target. Figures 4(g) and 4(h) display the intensity distributions of the point spread function (PSF) in the xy and xz planes of 500 nm fluorescent microspheres under the two imaging modes. The SIDH results exhibit higher lateral resolution and a higher signal-to-noise ratio than wide-field imaging, although the axial resolution is lower than that of wide-field imaging.


    Figure 4. Experimental results of fluorescent microspheres and resolution plate target. (a), (c) Recorded holograms, with different phase shifting values. (b) GP-SIDH refocused on different spatial depths. (d) GP-SIDH reconstruction results. (e) Wide-field images at different spatial depths. (f) Wide-field image in focus. (g), (h) xy and xz intensity distributions of 500 nm microspheres.

    B. Demonstration of Computational Aberration Correction

    Microscopy imaging systems suffer from image degradation caused by system- or sample-induced optical aberrations. In particular, aberrations generated by biological samples tend to be highly complex and anisotropic. We propose a computational aberration correction method based on self-interference digital holography that fits Zernike polynomials in the frequency domain and optimizes their coefficients with the SPGD algorithm. Due to the presence of optical aberrations, the intensity distributions of the reconstructed fluorescent microspheres in the xy plane [Fig. 5(a)] and xz plane [Fig. 5(c)] are distorted. As demonstrated in Figs. 5(d) and 5(f), the 3D reconstructions can be effectively corrected by the proposed method; the correction phase masks (insets) were calculated by the algorithm. Figures 5(b) and 5(e) depict the corresponding Fourier spectra.


    Figure 5. Aberration correction results. (a), (c) Uncorrected xy and xz plane images of microspheres. (d), (f) Corrected xy and xz plane images. (b), (e) Fourier spectra of (a) and (d), respectively. (g) Reconstructed aberrated images. (h) Corrected images. (i) Wide-field images. (g1), (g2) to (i1), (i2) Magnified regions of (g)–(i), showing actin-labeled U2OS cells. (j) Intensity profiles along the marked lines.

    To validate the performance of the proposed fluorescence holographic microscopy for observing biological samples, imaging of fluorescently labeled F-actin was conducted. U2OS cells were cultured in 35 mm diameter polystyrene culture dishes (35 mm TC-Treated Culture Dish, Corning). A 4% paraformaldehyde solution was used to fix the cells. Following fixation, the cells were rinsed twice with phosphate-buffered saline (PBS). The F-actin in the cells was then stained with a dye (Alexa Fluor 488 phalloidin). The cover glass was washed twice with PBS, and fluorescence antifade mounting medium (ProLong Diamond Antifade Mountant with DAPI, Life Technologies) was added to the sample before mounting for final fixation. The conventional holographic reconstruction results shown in Fig. 5(g) suffer from aberrations due to the complex cell structures and mismatched refractive indices, resulting in blurred reconstructions with indiscernible details. Figure 5(h) illustrates the results after computational aberration correction, with the phase mask used for compensation depicted in the top right corner. For comparison, wide-field imaging results are shown in Fig. 5(i), with regions of interest magnified for clarity. The intensity distributions of the distinct color-coded regions are depicted in Fig. 5(j). The experimental results demonstrate that the proposed holographic microscope is capable of imaging complex biological cells, and the computational correction method effectively rectifies optical aberrations, significantly enhancing the resolution of actin structures and the overall image quality.

    C. Imaging Performance of Multifunctional Microscopy

    Figure 6 presents experimental results of the proposed multifunctional microscope. COS7 cells were observed, with F-actin exhibiting enhanced detail and three-dimensional structural information. The cell preparation procedures align with those described earlier. Figures 6(a1) and 6(a4) depict the reconstructed xy and xz plane intensity distributions obtained using conventional methods, with an axial depth of 20 μm. Figures 6(a2) and 6(a3) represent magnified regions in Fig. 6(a1). Figures 6(b1) and 6(b4) illustrate the reconstructed xy and xz plane intensity distributions obtained using the computational aberration correction method, with the calculated phase distribution shown in the top right corner. Figures 6(b2) and 6(b3) represent magnified regions of interest within Fig. 6(b1). Noticeable enhancements in image resolution (indicated by white arrows) and increased detail clarity are evident in the corrected images. Figures 6(c1) and 6(c4) display the reconstructed edge images of the xy and xz planes obtained using the complex-amplitude vortex phase filtering transfer function, with Figs. 6(c2) and 6(c3) representing magnified regions in Fig. 6(c1). Additionally, wide-field imaging results can be easily obtained without changing the system configuration [Figs. 6(d1) and 6(d4)], facilitated by the nano z-stage. Figures 6(d2) and 6(d3) depict magnified regions in Fig. 6(d1). In summary, our proposed fluorescence holographic microscope can capture holograms and deliver three computationally reconstructed imaging modes, namely aberration-corrected, edge, and wide-field imaging, all without altering the system setup.


    Figure 6. Imaging performance of the multifunctional microscope. (a)–(d) Reconstructed aberrated images, corrected images, edge images, and wide-field images. (a1)–(d1) and (a4)–(d4) xy and xz intensity distributions. (a2), (a3), (b2), (b3), (c2), (c3), and (d2), (d3) Magnified regions of (a1)–(d1), showing actin-labeled COS7 cells, respectively.

    4. CONCLUSION

    In summary, we developed a novel self-interference fluorescence holographic microscopy approach using a geometric phase lens. We first introduced the basic principles of multifunctional self-interference holography. Computational adaptive optics and edge enhancement algorithms are employed to digitally reconstruct the holograms. We then constructed a fluorescence holographic microscope system to validate the effectiveness of multifunctional computational imaging. The experimental results on actin-labeled U2OS and COS7 cells demonstrated that our approach performs well on biological samples and can be used to observe subtle 3D structures in cells after aberration correction. To the best of our knowledge, this is the first demonstration of a fluorescence holographic microscope offering three imaging modes. Conventional wide-field imaging enables rapid focusing on the plane of interest within cells. Fluorescence holographic imaging provides 3D structural and positional information about the sample. Edge imaging reliably detects and identifies detailed information regarding cell boundaries. The proposed microscope maintains its compact size across all optical modes, ensuring versatility without compromising portability. Meanwhile, the integration of a camera with polarizer arrays and electronic phase shifting enhances hologram acquisition efficiency. We expect that this method will open new opportunities in biomedical research and industrial defect detection, enhancing the capabilities of existing microscopes in the field.

    [11] Y. Gao, L. Cao. Iterative projection meets sparsity regularization: towards practical single-shot quantitative phase imaging with in-line holography. Light Adv. Manuf., 4, 6(2023).

    [34] J. L. Pech-Pacheco, G. Cristobal, J. Chamorro-Martinez. Diatom autofocusing in brightfield microscopy: a comparative study. 15th International Conference on Pattern Recognition, 3, 314-317(2000).

    Paper Information

    Category: Imaging Systems, Microscopy, and Displays

    Received: Jun. 20, 2024

    Accepted: Sep. 15, 2024

    Published Online: Nov. 1, 2024

    The Author Email: Yuhong Wan (yhongw@bjut.edu.cn)

    DOI:10.1364/PRJ.533485
