
This cover illustrates scanning confocal imaging techniques based on photon-timestamped information. The three-dimensional time tunnel highlights the excavation of the time dimension. The discrete photons evoke a limited photon budget, from which only the first ten photons are needed for reconstruction. The three-dimensional cellular structure corresponds to the three-dimensional imaging capability of confocal technology. PT-Confocal will push biomicroscopy even closer to the limits of low-light imaging. See Siyuan Yin et al., pp. 021005.
Polarimetric imaging, leveraging measurements of polarimetric parameters that encode distinct physical properties, finds wide application across diverse domains. However, some critical polarization information is highly sensitive to noise, and denoising polarimetric images while preserving polarization information remains a challenge. The development of denoising techniques for polarized images can be roughly divided into three stages. The first stage involves the direct application of traditional image denoising algorithms, such as spatial- and transform-domain filtering. The second stage involves methods specially designed for polarized images, which use image prior models for noise removal, such as principal component analysis and K-singular value decomposition. In the third stage, benefiting from advances in deep learning, denoising methods integrate polarization characteristics with deep learning models for noise suppression: effective models such as the residual dense network and U-Net are appropriately modified and trained in a supervised or self-supervised manner to denoise regular or extended polarimetric images. In this paper, we present a comparative study of polarimetric image denoising methods. These methods are first classified into traditional and learning-based approaches; then, the motivations and principles of the different types of denoising methods are analyzed; finally, some open challenges and directions for future research are pointed out.
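To make the noise sensitivity concrete: the degree and angle of linear polarization (DoLP, AoLP) are ratios of differences of intensity measurements, so shot noise that is negligible in each raw frame can dominate a weakly polarized signal. Below is a minimal NumPy sketch (illustrative only, not any of the surveyed methods; the scene parameters are invented) that computes the linear Stokes parameters from four polarizer-angle images and shows the effect:

```python
import numpy as np

def stokes_from_intensities(i0, i45, i90, i135):
    """Linear Stokes parameters from four polarizer-angle images."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # 0/90 deg difference
    s2 = i45 - i135                      # 45/135 deg difference
    return s0, s1, s2

def dolp_aolp(s0, s1, s2, eps=1e-8):
    """Degree and angle of linear polarization."""
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)
    aolp = 0.5 * np.arctan2(s2, s1)
    return dolp, aolp

# Weakly polarized synthetic scene (true DoLP = 0.02) + shot-like noise.
rng = np.random.default_rng(0)
truth = 100.0 * np.ones((64, 64))
imgs = [truth * (1 + 0.02 * np.cos(2 * a - 0.6))
        for a in np.deg2rad([0, 45, 90, 135])]
noisy = [im + rng.normal(0, np.sqrt(im)) for im in imgs]  # Poisson-like noise
dolp, aolp = dolp_aolp(*stokes_from_intensities(*noisy))
print("DoLP std:", dolp.std())  # noise dominates the small S1, S2 differences
```

For this scene the noise-driven DoLP fluctuations come out several times larger than the 0.02 signal, which is exactly the regime the surveyed denoisers target.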
Pathological examination is essential for cancer diagnosis. Frozen sectioning has been the gold standard for intraoperative tissue assessment; however, it is hampered by laborious processing steps and often yields inadequate tissue slide quality. To address these limitations, we developed a deep-learning-assisted, ultraviolet light-emitting diode (UV-LED) microscope for label-free and slide-free tissue imaging. Using the UV-based light-sheet (UV-LS) imaging mode as the learning target, high-contrast UV-LED images are generated by a weakly supervised network for contrast enhancement. With our approach, the image acquisition speed for providing contrast-enhanced UV-LED (CE-LED) images is 47 s/cm², ∼25 times faster than that of the UV-LS system. The results show that this approach significantly enhances the image quality of UV-LED imaging, revealing essential tissue structures in cancerous samples. The resulting CE-LED system offers a low-cost, nondestructive, and high-throughput alternative histological imaging technique for intraoperative cancer detection.
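The abstract does not spell out the network or its losses, so the following PyTorch sketch is only one plausible reading of "weakly supervised contrast enhancement": a small generator trained on unpaired UV-LED and UV-LS batches with an LSGAN-style adversarial loss plus an L1 content-fidelity term. All layer choices and loss weights here are invented placeholders:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

gen = nn.Sequential(                      # raw UV-LED image -> enhanced image
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1))
disc = nn.Sequential(                     # does this patch look like UV-LS?
    nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, padding=1))
g_opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=2e-4)

def weakly_supervised_step(uvled, uvls):
    """One training step on UNPAIRED UV-LED and UV-LS batches (LSGAN-style)."""
    fake = gen(uvled)
    # Discriminator: real UV-LS patches -> 1, enhanced UV-LED patches -> 0.
    p_real, p_fake = disc(uvls), disc(fake.detach())
    d_loss = F.mse_loss(p_real, torch.ones_like(p_real)) + \
             F.mse_loss(p_fake, torch.zeros_like(p_fake))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    # Generator: match UV-LS contrast statistics while preserving content.
    p_gen = disc(fake)
    g_loss = F.mse_loss(p_gen, torch.ones_like(p_gen)) + \
             10.0 * F.l1_loss(fake, uvled)   # content-fidelity term (weight invented)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

weakly_supervised_step(torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64))
```

The fidelity term keeps the enhanced output anchored to the input tissue structure, the usual device when no pixel-aligned ground truth exists between the two modalities.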
For image and video compression on resource-limited platforms, we propose block-modulating video compression (BMVC), an ultralow-cost encoder that can be implemented on mobile platforms with low power and computation budgets. Accordingly, we also develop two types of BMVC decoders, implemented by deep neural networks. The first BMVC decoder is based on the plug-and-play algorithm and is flexible with respect to different compression ratios. The second decoder is a memory-efficient end-to-end convolutional neural network that aims for real-time decoding. Extensive results on high-definition images and videos demonstrate the superior performance of the proposed codec and its robustness against bit quantization.
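As a rough illustration of what a block-modulating encoder of this kind can look like (assuming the snapshot-compressive-imaging convention of masking tiles and summing them; the paper's exact codec details may differ), the NumPy sketch below compresses a 1024 x 1024 image into a single 256 x 256 measurement, i.e., a compression ratio of 16:

```python
import numpy as np

def bmvc_encode(img, block=256, seed=0):
    """Block-modulating encoder sketch: split the image into block-sized
    tiles, multiply each tile by a fixed pseudo-random binary mask, and
    sum the tiles into one block-sized measurement. The compression ratio
    equals the number of tiles."""
    H, W = img.shape
    tiles = [img[r:r + block, c:c + block]
             for r in range(0, H, block) for c in range(0, W, block)]
    rng = np.random.default_rng(seed)  # masks are shared with the decoder
    masks = rng.integers(0, 2, size=(len(tiles), block, block)).astype(np.float64)
    meas = sum(m * t for m, t in zip(masks, tiles))
    return meas, masks

img = np.random.rand(1024, 1024)
meas, masks = bmvc_encode(img)
print(meas.shape, "CR =", img.size / meas.size)  # (256, 256) CR = 16.0
```

A plug-and-play decoder would then alternate a least-squares data-fidelity update using the stored masks with an off-the-shelf image denoiser, which is what makes that decoder flexible across compression ratios.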
Endoscopic imaging is crucial for the minimally invasive observation of biological tissues. Notably, the integration of graded-index (GRIN) waveguides with convolutional neural networks (CNNs) has shown promise in enhancing endoscopy quality thanks to the synergistic combination of hardware-based dispersion suppression and software-based image restoration. However, conventional CNNs are typically ineffective against the diverse intrinsic distortions of real-life imaging systems, limiting their use to rectifying extrinsic distortions. This issue is particularly urgent in wide-spectrum GRIN endoscopes, where random variation in the equivalent optical length leads to catastrophic imaging distortion. To address this problem, we propose a novel network architecture termed the classified-cascaded CNN (CC-CNN), which comprises a virtual-real discrimination network and a physical-aberration correction network, each tailored to a distinct physical distortion source based on prior knowledge. By aligning its processing logic with physical reality, the CC-CNN achieves high-fidelity intrinsic distortion correction for GRIN systems, even with limited training data. Our experiment demonstrates that complex distortions from multiple random-length GRIN systems can be effectively restored by a single CC-CNN. This research offers insights into next-generation GRIN-based endoscopic systems and highlights the untapped potential of CC-CNNs designed under the guidance of categorized physical models.
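The abstract specifies the two-stage logic (discriminate the distortion source first, then apply a correction network matched to it) but not the layers, so the PyTorch sketch below is a hypothetical minimal instance of that classified-cascaded routing; the classifier head, corrector depth, and class count are all invented:

```python
import torch
import torch.nn as nn

class CCCNN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        # Stage 1: discrimination network -- predicts which distortion class
        # (e.g., which equivalent optical length) produced the input image.
        self.discriminator = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes))
        # Stage 2: one aberration-correction network per class, each
        # specialized for a single physical distortion source.
        self.correctors = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(32, 1, 3, padding=1))
            for _ in range(n_classes)])

    def forward(self, x):
        cls = self.discriminator(x).argmax(dim=1)          # classify distortion
        return torch.stack([                               # route per sample
            self.correctors[int(c)](xi.unsqueeze(0)).squeeze(0)
            for xi, c in zip(x, cls)])

model = CCCNN()
restored = model(torch.randn(2, 1, 64, 64))
print(restored.shape)  # torch.Size([2, 1, 64, 64])
```

In practice the two stages would be trained separately, since the argmax routing is not differentiable; the sketch shows inference only.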
Realizing real-time, highly accurate three-dimensional (3D) imaging of dynamic scenes is a fundamental challenge in fields ranging from online monitoring to augmented reality. Traditional phase-shifting profilometry (PSP) and Fourier transform profilometry (FTP) struggle to balance accuracy and measurement efficiency, while existing deep-learning-based 3D imaging approaches fall short in speed and flexibility. To address these challenges, we propose a real-time 3D imaging method based on region-of-interest (ROI) fringe projection and a lightweight phase-estimation network: the ROI fringe projection strategy increases the fringe period on the tested surface; the phase-estimation network (PE-Net) ensures both phase accuracy and inference speed; and a modified heterodyne phase unwrapping (MHPU) method enables flexible phase unwrapping for the final 3D imaging outputs. Experimental results demonstrate that the proposed workflow achieves 3D imaging at 100 frames/s with a root mean square (RMS) error below 0.031 mm, providing a real-time solution with high accuracy, efficiency, and flexibility.
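For context, the classical baseline that PSP builds on, and that a learned phase estimator is designed to approximate at a fraction of the acquisition and inference cost, is the N-step phase-shifting formula: given frames I_n = A + B cos(phi - 2*pi*n/N), the wrapped phase is phi = atan2(sum_n I_n sin(2*pi*n/N), sum_n I_n cos(2*pi*n/N)). A minimal NumPy check of this textbook formula (not the paper's network) follows:

```python
import numpy as np

def psp_wrapped_phase(frames):
    """Classical N-step (N >= 3) phase shifting:
    frames[n] = A + B*cos(phi - 2*pi*n/N).
    Returns the wrapped phase phi in [-pi, pi]."""
    N = len(frames)
    deltas = 2 * np.pi * np.arange(N) / N
    num = sum(I * np.sin(d) for I, d in zip(frames, deltas))
    den = sum(I * np.cos(d) for I, d in zip(frames, deltas))
    return np.arctan2(num, den)

# Synthetic 4-step check against a known phase ramp.
phi_true = np.linspace(0, 20, 512)                 # arbitrary test phase
deltas = 2 * np.pi * np.arange(4) / 4
frames = [0.5 + 0.4 * np.cos(phi_true - d) for d in deltas]
err = np.angle(np.exp(1j * (psp_wrapped_phase(frames) - phi_true)))
print(np.abs(err).max())                           # ~1e-15: exact up to rounding
```

The wrapped output is only known modulo 2*pi, which is why a phase unwrapping stage such as the MHPU method above is still needed before conversion to absolute 3D coordinates.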
Confocal microscopy, as an advanced imaging technique for increasing optical resolution and contrast, has diverse applications ranging from biomedical imaging to industrial detection. However, the focused energy on the samples would bleach fluorescent substances and damage illuminated tissues, which hinders the observation and presentation of natural processes in microscopic imaging. Here, we propose a photonic timestamped confocal microscopy (PT-Confocal) scheme to rebuild the image with limited photons per pixel. By reducing the optical flux to the single-photon level and timestamping these emission photons, we experimentally realize PT-Confocal with only the first 10 fluorescent photons. We achieve the high-quality reconstructed result by optimizing the limited photons with maximum-likelihood estimation, discrete wavelet transform, and a deep-learning algorithm. PT-Confocal treats signals as a stream of photons and utilizes timestamps carried by a small number of photons to reconstruct their spatial properties, demonstrating multi-channel and three-dimensional capacity in the majority of biological application scenarios. Our results open a new perspective in ultralow-flux confocal microscopy and pave the way for revealing inaccessible phenomena in delicate biological samples or dim life systems.
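The maximum-likelihood ingredient can be stated compactly: for a pixel whose photon emissions form a Poisson process with rate lambda, the likelihood of observing first-photon timestamps t_1 < ... < t_k is proportional to lambda^k exp(-lambda t_k), which is maximized at lambda_hat = k / t_k. The NumPy sketch below simulates one pixel at the paper's k = 10; the simulation and the chosen true flux are invented for illustration, and the full PT-Confocal pipeline additionally regularizes such per-pixel estimates with the wavelet and deep-learning steps:

```python
import numpy as np

rng = np.random.default_rng(1)

def mle_flux_from_first_k(timestamps_k):
    """MLE of Poisson photon flux from the first k photon timestamps:
    likelihood lambda^k * exp(-lambda * t_k) peaks at lambda_hat = k / t_k,
    so only the k-th (last) timestamp matters. (Slightly biased for small k;
    a k/(k-1) correction is possible.)"""
    k = len(timestamps_k)
    return k / timestamps_k[-1]

# One pixel with true flux 5 photons per unit time; record the first 10
# arrival times as cumulative sums of exponential inter-arrival times.
true_flux, k = 5.0, 10
arrivals = np.cumsum(rng.exponential(1 / true_flux, size=k))
print("estimate:", mle_flux_from_first_k(arrivals))  # close to 5 for k = 10
```

Because only t_k enters the estimate, a fixed photon budget of ten timestamps per pixel already pins down the local flux, which is what makes reconstruction at single-photon-level illumination feasible.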