Advanced Imaging
Co-Editors-in-Chief: Xiaopeng Shao, Sylvain Gigan
  • Hanchu Ye, Zitong Ye, Yunbo Chen, Jinfeng Zhang, Xu Liu, Cuifang Kuang, Youhua Chen, and Wenjie Liu

    Structured illumination microscopy (SIM) imposes no special requirements on the fluorescent dyes used for sample labeling and doubles resolution beyond the optical diffraction limit with low phototoxicity, making it well suited to dynamic observation of live samples. However, traditional SIM reconstruction algorithms are prone to artifacts because they require a high signal-to-noise ratio (SNR), and existing deep-learning SIM algorithms still leave room to improve imaging speed. Here, we introduce a deep-learning-based, video-level, high-fidelity super-resolution SIM reconstruction method, termed video-level deep-learning SIM (VDL-SIM), which reaches an imaging speed of up to 47 frame/s, providing a smooth viewing experience for users. In addition, VDL-SIM can robustly reconstruct sample details under a low light dose, which greatly reduces damage to the sample during imaging. Compared with existing SIM algorithms, VDL-SIM images faster than current deep-learning methods and with higher fidelity at low SNR, an advantage that is even more pronounced over traditional algorithms. These characteristics make VDL-SIM a useful video-level super-resolution alternative to conventional methods under challenging imaging conditions.
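    The abstract does not specify the VDL-SIM architecture, so as rough orientation the sketch below shows only the generic deep-learning SIM inference pattern such methods build on: a stack of raw frames (here an assumed 3 angles × 3 phases = 9) is mapped by a network to one super-resolved frame at twice the sampling rate. The tiny convolutional stack (ToySIMNet) is a placeholder of my own, not the published model.

    # Minimal sketch of deep-learning SIM inference: 9 raw frames in,
    # one super-resolved frame out. Placeholder net, NOT VDL-SIM.
    import torch
    import torch.nn as nn

    class ToySIMNet(nn.Module):
        def __init__(self, n_raw=9, upscale=2):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(n_raw, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                # PixelShuffle doubles the sampling rate, mirroring the
                # ~2x resolution gain that SIM reconstruction targets.
                nn.Conv2d(64, upscale**2, 3, padding=1),
                nn.PixelShuffle(upscale),
            )

        def forward(self, raw_stack):      # (B, 9, H, W) raw frames
            return self.body(raw_stack)    # (B, 1, 2H, 2W) SR frame

    net = ToySIMNet().eval()
    raw = torch.rand(1, 9, 256, 256)   # stand-in 9-frame acquisition
    with torch.no_grad():
        sr = net(raw)
    print(sr.shape)                    # torch.Size([1, 1, 512, 512])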

    Apr. 05, 2024
  • Vol. 1 Issue 1 011001 (2024)
  • Haogong Feng, Runze Zhu, and Fei Xu

    Optical fiber bundles frequently serve as crucial components in flexible miniature endoscopes, transmitting end-to-end images directly for medical and industrial applications. Each core usually acts as a single pixel, so image resolution is limited by the core size and core spacing. We propose a method that exploits the hidden information embedded in the pattern within each core to break this limitation and obtain high-dimensional light-field information and more features of the original image, including edges, texture, and color. Intra-core patterns are mainly related to the spatial angle of the captured light rays and the shape of the core. A convolutional neural network accelerates the extraction of intra-core features that encode the light-field information of the whole scene, transforms those features into real image details, and enhances otherwise invisible texture features and the colorization of fiber-bundle images.
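    A minimal sketch of the core idea follows: rather than averaging each fiber core down to a single pixel, crop the intra-core pattern around every core center and let a small CNN map it to a richer feature vector carrying edge/texture/color cues. The core positions, patch size, and network (CorePatternEncoder) are illustrative assumptions, not the authors' pipeline.

    # Sketch: encode intra-core patterns instead of one-pixel-per-core.
    import torch
    import torch.nn as nn

    PATCH = 9  # assumed intra-core patch size in pixels

    class CorePatternEncoder(nn.Module):
        """Maps one intra-core patch to an 8-D feature (placeholder)."""
        def __init__(self, feat_dim=8):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(16, feat_dim),
            )
        def forward(self, patches):   # (N, 1, PATCH, PATCH)
            return self.net(patches)

    def crop_core_patches(image, centers, half=PATCH // 2):
        """Crop a PATCH x PATCH window around each core center."""
        patches = [image[:, y - half:y + half + 1, x - half:x + half + 1]
                   for y, x in centers]
        return torch.stack(patches)

    img = torch.rand(1, 128, 128)             # stand-in bundle image
    centers = [(20, 20), (20, 32), (32, 26)]  # assumed core centers (y, x)
    feats = CorePatternEncoder()(crop_core_patches(img, centers))
    print(feats.shape)  # torch.Size([3, 8]): one feature vector per core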

    Apr. 09, 2024
  • Vol. 1 Issue 1 011002 (2024)
  • Peng-Yu Jiang, Zheng-Ping Li, Wen-Long Ye, Ziheng Qiu, Da-Jian Cui, and Feihu Xu

    The single-photon sensitivity and picosecond time resolution of single-photon light detection and ranging (LiDAR) provide a full-waveform profile for retrieving the three-dimensional (3D) profile of a target separated from foreground clutter. This capability has made single-photon LiDAR a solution for imaging through obscurants, camouflage nets, and semitransparent materials. However, the obstructive presence of clutter and the limited pixel counts of single-photon detector arrays still pose challenges to high-quality imaging. Here, we demonstrate a single-photon array LiDAR system combined with tailored computational algorithms for high-resolution 3D imaging through camouflage nets. For static targets, we develop a 3D sub-voxel scanning approach along with a photon-efficient deconvolution algorithm. Using this approach, we demonstrate 3D imaging through camouflage nets with a 3× improvement in spatial resolution and a 7.5× improvement in depth resolution compared with the inherent system resolution. For moving targets, we propose a motion compensation algorithm to mitigate the net's obstructive effects, achieving video-rate imaging of camouflaged scenes at 20 frame/s. More importantly, we demonstrate 3D imaging of complex scenes in various outdoor scenarios and evaluate the advantages of single-photon LiDAR over a visible-light camera and a mid-wave infrared (MWIR) camera. The results point the way toward high-resolution, real-time 3D imaging of multi-depth scenes.
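    For intuition on how depth is read out of single-photon timing data, the sketch below uses plain matched filtering: correlate each pixel's photon-count histogram with the instrument response and convert the peak bin to range. This deliberately simpler stand-in is not the paper's photon-efficient deconvolution or sub-voxel scanning; the bin width and Gaussian IRF are assumed values.

    # Matched-filter depth retrieval from per-pixel timing histograms.
    import numpy as np

    C = 3e8          # speed of light (m/s)
    BIN_PS = 50.0    # assumed TDC bin width (ps)

    def depth_from_histograms(hists, irf):
        """hists: (H, W, T) photon-count histograms; irf: (K,) response."""
        # Correlate along the time axis (matched filter against the IRF).
        scores = np.apply_along_axis(
            lambda h: np.correlate(h, irf, mode="same"), 2, hists)
        peak_bin = scores.argmax(axis=2)           # (H, W) peak time bins
        tof = peak_bin * BIN_PS * 1e-12            # time of flight (s)
        return C * tof / 2.0                       # round trip -> range (m)

    rng = np.random.default_rng(0)
    hists = rng.poisson(0.1, size=(4, 4, 400)).astype(float)  # sparse noise
    hists[:, :, 200] += 5                          # synthetic return peak
    irf = np.exp(-0.5 * ((np.arange(9) - 4) / 1.5) ** 2)  # Gaussian IRF
    print(depth_from_histograms(hists, irf))       # ~1.5 m at bin 200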

    May. 15, 2024
  • Vol. 1 Issue 1 011003 (2024)