Chinese Optics Letters, Volume 22, Issue 6, 060007 (2024)

Dynamic imaging through scattering medium under white-light illumination [Invited]

Junyao Lei, Hui Chen*, Yuan Yuan, Yunong Sun, Jianbin Liu, Huaibin Zheng, and Yuchen He
Author Affiliations
  • Electronic Material Research Laboratory, Key Laboratory of the Ministry of Education and International Centre for Dielectric Research, Xi’an Jiaotong University, Xi’an 710049, China

    Imaging objects hidden behind turbid media is of great scientific importance and practical value and has been drawing much attention recently. However, most scattering-imaging methods rely on a narrow linewidth of light, which limits their application: a mixture of scattered light from various spectral components blurs the detected speckle pattern, making phase retrieval difficult. Image reconstruction becomes even harder for dynamic objects because of the short exposure times. Here we investigate non-invasively recovering images of dynamic objects under white-light irradiation with the multi-frame OTF retrieval engine (MORE). By exploiting redundant information from multiple measurements, MORE recovers the phase of the optical transfer function (OTF) instead of recovering a single image of an object. Furthermore, we introduce the number of non-zero pixels (NNP) into MORE, which improves the recovered images. An experimental proof is performed for dynamic objects at a frame rate of 20 Hz under white-light irradiation with more than 300 nm bandwidth.


    1. Introduction

    In traditional imaging methods, including conventional lens imaging and coherent diffraction imaging, the information of light transmission is determinable, e.g., the point spread function (PSF) of the system can be resolved. However, when light is scattered by a turbid medium, the propagation information is scrambled and cannot be resolved directly or with a simple formula, which makes imaging difficult. Unfortunately, such scattering scenarios are common in everyday situations, e.g., atmospheric disturbance in astronomical imaging, biological tissue in medical imaging, and foggy weather in daily life[1–3]. With technological development, there is a growing demand for methods that overcome the limitations of traditional imaging and image objects hidden in scattering media, so that the morphological structure and other appearance information of a target can be observed even when it cannot be seen directly. This has important scientific value as well as great practical potential in industry and daily life.

    During the past decades, many methods and techniques have been proposed, for example, wavefront modulation[4–9], optical transmission matrix measurement[10–14], scattering holography[15–18], and speckle correlation imaging[19–22]. Wavefront modulation achieves super-diffraction-limited focusing and imaging through the scattering medium by precisely controlling a spatial light modulator (SLM); however, it usually requires an auxiliary guide star or another known object as a reference in the target plane. Transmission matrix measurement characterizes the scattering medium as a linear system with a two-dimensional transmission matrix, which is measured using a spatial light modulator and full-field phase-shifting interferometry, and then reconstructs the image of the hidden object; this approach requires very high accuracy in determining the transmission matrix.

    The speckle correlation method (originating from speckle interferometry) seeks the Fourier magnitude of an object based on the memory effect and reconstructs the image with a phase retrieval operation[23–29]. This method has the advantages of being non-invasive, simple, and computationally light, and has attracted a lot of attention during the last two decades[19,20]. However, phase retrieval algorithms such as hybrid input-output (HIO) and error reduction (ER) are quite vulnerable to noise (from the environment or the detection system)[30], making speckle correlation hard to apply in low signal-to-noise ratio (SNR) situations. A method called the multi-frame OTF retrieval engine (MORE) was proposed for non-invasive imaging under low SNRs[31]; it not only introduces the optical transfer function (OTF) constraint into the iteration process but also recovers the OTF and multiple sub-objects simultaneously, bringing high stability to the phase retrieval.

    When light transmits through a turbid medium, the interference among the scatterers forms a random-like diffraction pattern (a so-called speckle pattern). Different wavelengths construct different speckle patterns. Therefore, the PSF under broad-spectrum illumination is a superposition of different patterns, which makes it blurred: not only is the size of the speckle grains (which determines the resolution) broadened, but the background of the PSF also rises. This causes a low SNR of the detectable speckle patterns, making imaging under a broad spectrum much more difficult than under a narrow bandwidth. Since applications with broad spectra of light are inevitable, much research has been conducted on this topic during the last several years[32–37]. Wu et al. introduced the R-autocorrelation approach to increase the contrast of a PSF by randomly selecting and averaging different sub-regions of the speckle patterns[32]. In the work of Sun et al., acquiring and processing speckles with polarization information improves the speckle contrast, achieving scattering imaging under broadband illumination[33]. Lu et al. introduced the OTF constraint into scattering imaging under broadband illumination and successfully achieved correct results[35]. Deep learning has also been applied to scattering imaging under white-light illumination, which, however, requires a large amount of sample data for end-to-end learning[34]. Furthermore, MORE (including the OTF constraint and multi-frame reconstruction) has been employed for imaging under very broad spectra, as well as for multi-spectrum imaging[37].

    Imaging dynamic objects is also inevitable in realistic applications. Many studies have addressed imaging dynamic objects through scattering media, using, e.g., digital holography[15], the “shower curtain effect”[22], deep learning[38,39], and the MORE technique[31]. Nevertheless, these studies were conducted under narrow spectra. Since the short exposure time required for dynamic capture further reduces the SNR, imaging dynamic objects in white light can be regarded as an extreme low-SNR case, where merely using MORE might fail. In this paper, we introduce a constraint on the number of non-zero pixels (NNP)[40] into MORE and extend it to handle such severe low-SNR cases. The experimental results and relevant simulations show that MORE can faithfully perform dynamic imaging, converging within just a few iterations, for dynamic objects under broad-spectrum illumination (more than 300 nm bandwidth). MORE does not require any calibration or preprocessing; it uses several captured scattering patterns to quickly reconstruct the phase of the OTF (PTF) and then directly computes all images with the obtained PTF. Since MORE retains the relative position and orientation information of the moving object at different moments, we can simply put all the recovered images together to create a video without worrying about image misalignment.

    2. Theory

    When the PSF of an imaging system is shift invariant, the intensity distribution on the detection plane, I(x,y;λ), can be described as a convolution of the PSF S(x−ξ, y−η; λ) and an object function O(ξ,η), which is assumed to be spectrum insensitive,

    $$I(x,y;\lambda)=[O\ast S](x,y;\lambda)\equiv\iint O(\xi,\eta)\,S(x-\xi,y-\eta;\lambda)\,\mathrm{d}\xi\,\mathrm{d}\eta,$$

    where (x,y) are the coordinates of the detection plane and (ξ,η) are the coordinates of the object plane.
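As a quick numerical illustration of this convolution model, the camera-plane intensity for one wavelength can be simulated by an FFT-based (circular) convolution. The array sizes and the random stand-in PSF below are arbitrary choices for the sketch, not values from the experiment.

```python
import numpy as np

def speckle_pattern(obj, psf):
    """Camera-plane intensity I = O * S: FFT-based circular convolution
    of an object function with a (speckle-like) PSF of the same shape."""
    return np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf)).real

rng = np.random.default_rng(0)
psf = rng.random((64, 64))            # stand-in for a speckle-like PSF
obj = np.zeros((64, 64))
obj[0, 0] = 1.0                       # point-like object at the origin
pattern = speckle_pattern(obj, psf)   # a point source reproduces the PSF
```

For a point source at the origin, the convolution returns the PSF itself, which is the defining property of a shift-invariant system.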

    In a traditional lens imaging system, S(x−ξ, y−η; λ) is a single-peak function, and I(x,y;λ) directly exhibits the image of the object. In a scattering scenario, S(x−ξ, y−η; λ) has multiple peaks (speckle-like), so I(x,y;λ) is a random mixture of multiple images, and an algorithm is needed to decode the image. On the other hand, a different wavelength results in a different PSF, as well as a different convolution. For a broad spectrum, the overall intensity distribution is a summation over all spectral components, and so is the overall PSF,

    $$\Gamma(x,y)=\sum_{\lambda}I(x,y;\lambda)=[O\ast\bar{S}](x,y)\equiv\iint O(\xi,\eta)\sum_{\lambda}S(x-\xi,y-\eta;\lambda)\,\mathrm{d}\xi\,\mathrm{d}\eta,$$

    where $\bar{S}=\sum_{\lambda}S$ exhibits a low contrast with blurred speckle grains. Γ(x,y) also has a quite large background, under whose fluctuation the effective signal is easily submerged, making image reconstruction difficult at such a low SNR. The Fourier transform of the above equation is

    $$\tilde{\Gamma}(u,v)=\tilde{O}(u,v)\cdot\tilde{S}(u,v).$$

    The tilde denotes the Fourier transform, and (u,v) are the coordinates of the Fourier domain. The total OTF is

    $$\tilde{S}(u,v)=\sum_{\lambda}\tilde{S}(u,v;\lambda)\approx|\tilde{S}(u,v;\lambda_0)|\cdot\sum_{\lambda}e^{i\phi(u,v;\lambda)}\equiv|\tilde{S}(u,v)|\,e^{i\Phi(u,v)},$$

    where λ0 is the central wavelength. The magnitude of the OTF (MTF) of each single spectral component, |S̃(u,v;λ)|, has a similar shape, determined by the pupil function of the aperture, while the phase of the OTF (PTF), e^{iϕ(u,v;λ)}, varies with wavelength. The total MTF is approximately flat within the diffraction-limited range[31]. Thus, the diffraction-limited image can be calculated via the PTF,

    $$M(x,y)=\mathcal{F}^{-1}\{\tilde{\Gamma}(u,v)\cdot e^{-i\Phi(u,v)}\}.$$
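The phase-only compensation in the last equation can be sketched numerically: under the flat-MTF assumption, multiplying the measured spectrum by the conjugate PTF is all that is needed to undo the scattering. This is a toy model with a random stand-in PSF, not the measured one.

```python
import numpy as np

rng = np.random.default_rng(1)
psf = rng.random((64, 64))                   # stand-in broadband PSF
obj = np.zeros((64, 64))
obj[28:36, 20:44] = 1.0                      # toy binary object

otf = np.fft.fft2(psf)                       # S~(u, v)
ptf = np.exp(1j * np.angle(otf))             # e^{i Phi(u, v)}

# Detected pattern Gamma = O * S, formed by FFT convolution.
gamma = np.fft.ifft2(np.fft.fft2(obj) * otf).real
# M = F^{-1}{ Gamma~ e^{-i Phi} }: compensate only the phase of the OTF.
M = np.fft.ifft2(np.fft.fft2(gamma) * np.conj(ptf)).real
```

M equals the object filtered by the non-negative MTF alone, i.e., an image free of the random speckle phase.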

    Instead of recovering an image from a single captured pattern Γ(x,y), we reconstruct the PTF from multiple captured frames of different sub-objects or different states of a dynamic object, denoted {Γ̃_f(u,v)} with f indexing the frames. This method, named MORE, has been proven capable of non-invasive imaging at a low signal-to-noise ratio and under broad spectra[31,37].

    On the other hand, since Γ(x,y) has a very high background, the signal above the background is relatively small, and a camera's limited dynamic range makes its measurement inaccurate. A broad spectrum therefore results in a low detection SNR, which deteriorates the reconstruction. We introduce the NNP constraint to improve the reconstructed image quality. The procedure of MORE is as follows.

    (1) Guess an initial PTF, Φ0.
    (2) Start from j = 1, where j denotes the jth iteration.
    (3) Start from f = 1, where f denotes the fth frame.
    (4) Use Φ_{j−1} to calculate the fth image: M_f(x,y) = F^{−1}{Γ̃_f(u,v)·e^{−iΦ_{j−1}}}.
    (5) Apply realness, non-negativity, and support constraints: M_f(x,y) = Re{M_f(x,y)} if (x,y) ∈ Ω and Re{M_f} ≥ 0; otherwise M_f(x,y) = 0.
    (6) Apply the NNP constraint to M_f(x,y) when j > P_s, where P_s is a starting point: keep only the N_np pixels with the highest values in M_f(x,y) and set the other pixels to zero.
    (7) Update the PTF: Φ_j = arg{Γ̃_f} − arg{M̃_f}.
    (8) If f = f_max, go to step (9); otherwise f ← f + 1 and go to step (4).
    (9) If j = j_max, exit; otherwise j ← j + 1 and go to step (3).

    In step (5), the real part of each pixel in M_f(x,y) is retained and the imaginary part is removed. Meanwhile, the real part must be non-negative; otherwise the pixel is set to zero. Ω is an estimated area in which the object exists, and everything outside Ω is set to zero.

    In step (6), since the reconstructed images in the very first iterations are far from correct, the NNP constraint would have an adverse side effect there; it is therefore only activated after the P_s-th iteration.
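The steps above can be sketched in a few lines. This is an illustrative re-implementation of the loop with our own variable names (the frame spectra are assumed precomputed), not the authors' code.

```python
import numpy as np

def more_with_nnp(frame_spectra, support, n_np, j_max=50, p_s=10):
    """MORE iteration with the NNP constraint, following steps (1)-(9).
    frame_spectra: list of 2-D spectra Gamma~_f of the captured frames;
    support:       boolean mask Omega on the image plane;
    n_np:          number of non-zero pixels kept once j > p_s."""
    phi = np.zeros(frame_spectra[0].shape)        # (1) initial PTF guess
    for j in range(1, j_max + 1):                 # (2), (9) iteration loop
        for g_f in frame_spectra:                 # (3), (8) frame loop
            # (4) image estimate from the current PTF
            m = np.fft.ifft2(g_f * np.exp(-1j * phi))
            # (5) realness, non-negativity, and support constraints
            m = np.real(m)
            m[(m < 0) | ~support] = 0.0
            # (6) NNP: keep only the n_np brightest pixels after p_s iterations
            if j > p_s:
                m[m < np.sort(m, axis=None)[-n_np]] = 0.0
            # (7) PTF update
            phi = np.angle(g_f) - np.angle(np.fft.fft2(m))
    return phi
```

Each frame updates the shared PTF in turn, which is how the relative positions of the sub-objects are preserved across frames.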

    3. Experiments

    The experimental setup is shown in Fig. 1. A projector projects a picture onto an object plane, simulating a self-emitting object. Light from the object plane propagates to a diffuser (220-grit ground glass) and is scattered towards a CCD. Right behind the diffuser is a circular aperture with a diameter of 5 mm. The resolution of the CCD is 5496 × 3672, with a pixel size of 2.4 µm. The distance from the target plane to the diffuser is u = 110 cm, and the distance from the CCD to the diffuser is v = 10 cm, giving an effective magnification of the scattering lens of M = v/u = 1/11. The exposure time of the camera is set to 50–300 ms for recording the scattering pattern.


    Figure 1.Schematic diagram of the experimental setup.

    3.1. Experiment for dynamic objects under white light irradiation

    The projector plays a movie to simulate a dynamic target. The object size is 1 cm × 1 cm. The CCD captures a sequence of speckle-like patterns at a 20 Hz frame rate. Five frames are randomly selected and fed into the MORE algorithm, which reconstructs the PTF; the PTF is then used to recover the images of all captured frames. Note that the NNP constraint is turned off here. We test two kinds of objects: (1) five letters successively passing through a small aperture (1 cm × 1 cm) on the object plane and (2) a rotating letter “E.” Figure 2 exhibits the results.


    Figure 2.Samples of the reconstructed video with MORE. (a) Video samples for the five translating letters at 50 ms exposure time. See Visualization 1. (b) Video samples for the rotating letter “E” at 50 ms exposure time. See Visualization 2.

    Traditional phase retrieval algorithms perform independent retrieval for each scattering pattern, so the position and orientation of each state of the object are uncertain. In contrast, the MORE algorithm deconvolves all states of the object with one PTF, which preserves the relative position and orientation of the moving object at different moments. The recovered images of the dynamic scene can therefore simply be stacked into a video, without any image-to-image processing and without worrying about misalignment between frames.

    3.2. Experiments on the recovery of objects under white light irradiation by MORE with NNP constraint

    We capture the scattering patterns of the objects “A” to “E” separately. Figures 3(b) and 3(c) show the recovery results using MORE without the NNP constraint under 50 ms and 300 ms exposure time, respectively. The contrast of the captured speckle patterns at 50 ms exposure time is 4.5%. Note that contrast = std/mean, with std standing for standard deviation.
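The 4.5% figure can be put in context with a toy statistics check: fully developed monochromatic speckle has an intensity contrast near 1, and averaging N decorrelated spectral speckle patterns scales it down by roughly 1/√N. The numbers below are illustrative, not the measured data.

```python
import numpy as np

def speckle_contrast(pattern):
    """Contrast = std / mean, the measure quoted in the text."""
    return pattern.std() / pattern.mean()

rng = np.random.default_rng(2)
# Fully developed speckle: exponentially distributed intensity, contrast ~ 1.
mono = rng.exponential(1.0, 100_000)
# Average of 400 decorrelated spectral channels: contrast drops to ~ 1/20 = 5%.
broad = sum(rng.exponential(1.0, 100_000) for _ in range(400)) / 400
```

A broadband measurement with a contrast of a few percent is thus consistent with the superposition of many decorrelated spectral speckle patterns.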


    Figure 3.Recovery of static objects under white light exposure experiment 1. (a) The objects “A” to “E,” (b) recovered images with MORE under 50 ms exposure time, and (c) recovered images with MORE under 300 ms exposure time.


    Figure 4.(a) shows the objects A–E projected by the projector, (b) shows the scattering pattern of the original corresponding object recorded by the camera under 300 ms exposure time, (c) shows the recovery of scattering of (b) by MORE algorithm with real and non-negative constraints, and (d) shows the recovery of scattering of (b) by MORE algorithm with the addition of non-zero-pixel constraint.

    We then turn on the NNP constraint and investigate how much it improves the reconstruction. The NNP can be estimated from the autocorrelation of the object[26]: theoretically, the number of non-zero pixels of the autocorrelation is four times that of the object. However, under very noisy circumstances the autocorrelation computed from the measured data may deviate considerably from the ideal one, so the estimated NNP should be larger than that of the original object. In the following, we enlarge the NNP by a factor (denoted as α) and see how this factor affects the recovered results.
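A rough numerical version of this estimate (our own sketch; the 5% threshold level and the array sizes are arbitrary assumptions): count the above-noise pixels of the autocorrelation, divide by four, and enlarge by α.

```python
import numpy as np

def estimate_nnp(obj_autocorr, alpha=1.8, level=0.05):
    """NNP estimate from an object autocorrelation: the autocorrelation has
    ~4x the non-zero pixels of the object, so divide the count by 4, then
    enlarge by alpha to tolerate noise in the measured data."""
    count = np.count_nonzero(obj_autocorr > level * obj_autocorr.max())
    return int(alpha * count / 4)

# Toy check against a known object: an 8 x 8 square (true NNP = 64).
obj = np.zeros((64, 64))
obj[10:18, 10:18] = 1.0
autocorr = np.fft.ifft2(np.abs(np.fft.fft2(obj)) ** 2).real
n_np = estimate_nnp(autocorr)
```

The returned value lands near α times the true pixel count, which is the over-estimate the text recommends for noisy data.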

    As shown in Fig. 5, at 300 ms exposure time (a less noisy circumstance), the tightest NNP (α = 1) leads to the best reconstruction, and the NNP constraint obviously improves the result. At 50 ms exposure time (a noisier circumstance), however, the tightest NNP makes the reconstruction fail, which is why the plot shows no data points from α = 1 to α = 1.4; the curve peaks at α = 1.8. This suggests that an NNP about twice the original one still effectively improves the reconstruction.


    Figure 5.SSIM versus the magnification factor of the original NNP.

    We next investigate the optimal starting iteration (P_s) at which the NNP constraint is turned on. According to Fig. 6, the later the NNP constraint is activated, the better the reconstruction.


    Figure 6.SSIM of the recovered images with or without adding NNP constraint at different exposure times.

    To measure the spatial resolution of the imaging system, a USAF1951 resolution plate was used as the object. The reconstructed images are shown in Fig. 7.


    Figure 7.(a) USAF1951 resolution board. The red rectangle indicates the part used as the object. (b) Recovered image of elements 3 and 4 of group 3 at 300 ms exposure time. (c) Recovered image of element 4 of group 3 at 300 ms exposure time.

    Element 4 of group 3 indicates a resolution of 22 µm. The spatial resolution of the system is estimated as

    $$\text{Resolution}=\frac{Z\lambda}{D}\approx 17.5\ \mu\mathrm{m},$$

    where λ = 700 nm, D = 8 mm is the diameter of the circular aperture in front of the diffuser, and Z = 200 mm is the distance between the object and the diffuser. Since the broad band blurs the PSF, the actual resolution should be somewhat worse than 17.5 µm. The experimental result is close to the theoretical prediction.
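The arithmetic behind the 17.5 µm estimate, using the values quoted above:

```python
# Diffraction-limit estimate Resolution ~ Z * lambda / D for the quoted values.
Z = 200e-3       # object-to-diffuser distance (m)
lam = 700e-9     # wavelength at the red end of the band (m)
D = 8e-3         # diameter of the circular aperture (m)
resolution = Z * lam / D
print(f"{resolution * 1e6:.1f} um")   # 17.5 um
```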

    4. Simulation

    A simulation platform is built to mimic the experiment and investigate how the NNP affects the performance of MORE. The diffuser is simulated with random phases, and the light propagating from the diffuser forms an interference pattern in the Fresnel zone, computed with the Rayleigh–Sommerfeld solution. In this way, we calculate 100 sub-PSFs at wavelengths between 400 nm and 700 nm in increments of 3 nm; adding the 100 sub-PSFs together produces the total PSF of the system. The speckle pattern of an object is formed by calculating the convolution of the PSF and the object function. By adding random noise to the PSF, different SNR situations can be simulated.
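A simplified version of this simulation (an angular-spectrum propagator rather than the full Rayleigh–Sommerfeld integral; the screen depth, distance, and grid are illustrative assumptions) shows how summing sub-PSFs lowers the speckle contrast:

```python
import numpy as np

def sub_psf(opd, wavelength, z, dx):
    """Propagate a thin random phase screen (optical path delays opd, in m)
    a distance z with the angular-spectrum kernel; return the intensity."""
    n = opd.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fx2 = fx[None, :] ** 2 + fx[:, None] ** 2
    k = 2 * np.pi / wavelength
    kz = np.sqrt(np.maximum(k**2 - (2 * np.pi) ** 2 * fx2, 0.0))
    field = np.exp(1j * 2 * np.pi * opd / wavelength)   # diffuser phase
    field = np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * z * kz))
    return np.abs(field) ** 2

rng = np.random.default_rng(3)
opd = rng.random((128, 128)) * 20e-6          # ground-glass path delays
wavelengths = np.linspace(400e-9, 700e-9, 100)
psfs = [sub_psf(opd, w, z=0.1, dx=2.4e-6) for w in wavelengths]
psf_total = np.sum(psfs, axis=0)              # broadband PSF: low contrast
```

Each wavelength sees a different phase 2π·OPD/λ on the same screen, so the sub-PSFs decorrelate across the band and their sum is a blurred, high-background PSF.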

    As shown in Fig. 8, in a sufficiently noisy situation MORE without the NNP constraint cannot recover images of feasible quality; as soon as the NNP is turned on, correct images are reconstructed. Figures 8(b)–8(h) show the recovery results of MORE with NNP under white light, with NNP amplification factors of 1.0, 1.2, 1.4, 1.6, 1.8, 2.0, and 2.2, respectively. The best amplification factor is around 1.4–1.8; when it exceeds 2.0, the reconstruction gradually deteriorates. This is consistent with the experimental results.


    Figure 8.Simulation results with different magnification factors of NNP under white light irradiation.

    5. Discussion

    Phase retrieval can be thought of as solving a system of equations: the unknown phases of an object are the solution recovered from magnitude measurements. If the number of unknown phases exceeds the number of independent equations, the problem is ill-posed. Unfortunately, noise deteriorates the accuracy of the equations, effectively decreasing the number of independent equations, which makes phase retrieval vulnerable to low SNRs. A support constraint greatly reduces the number of unknown variables, and a tight support effectively increases the reliability of reconstruction[29]. An NNP constraint removes even more unknowns than the corresponding support, bringing still more reliability. Moreover, MORE simultaneously recovers the phase of the OTF and five sub-objects, which mutually reinforce each other toward faithful convergence: a correctly recovered sub-object leads to a correct reconstruction of the OTF phase as well as of the other sub-objects, and vice versa. The five NNP constraints together thus drive the phase retrieval to a fast and reliable convergence to the global minimum.

    As shown in the experimental results and simulation, a broad bandwidth causes a low SNR of the detected light pattern through a turbid medium. The SNR is even lower when a dynamic object is dealt with, since the exposure time for each frame is very short. MORE with NNP not only can quickly converge to the correct images but also can improve the quality of recovered images. This work also inspires further research on imaging for grayscale objects or reflected targets[41] with spectral difference using MORE plus NNP.

    [1] I. S. McLean. Electronic Imaging in Astronomy: Detectors and Instrumentation (2008).

    [2] V. Tuchin. Tissue Optics: Light Scattering Methods and Instruments for Medical Diagnosis (2015).

    [23] A. Labeyrie. Attainment of diffraction limited resolution in large telescopes by Fourier analysing speckle patterns in star images. Astron. Astrophys., 6, 85 (1970).

    [28] J. R. Fienup. Phase retrieval with continuous version of hybrid input-output. Frontiers in Optics, OSA Technical Digest, ThI3 (2003).

    [32] T. Wu, C. Guo, X. Shao. Non-invasive imaging through thin scattering layers with broadband illumination (2018).

    Paper Information

    Special Issue: SPECIAL ISSUE ON QUANTUM IMAGING

    Received: Dec. 22, 2023

    Accepted: Apr. 1, 2024

    Published Online: Jun. 27, 2024

    The Author Email: Hui Chen (chenhui@xjtu.edu.cn)

    DOI:10.3788/COL202422.060007
