Advanced Photonics Nexus, Volume 4, Issue 2, 026010 (2025)
Large-scale single-pixel imaging and sensing
Fig. 1. Overview of the SPIS technique. (a) Optical setup of the SPIS technique. The structured illumination is generated using a DMD and a white-light source, and a single-pixel detector collects the light reflected from the target scene. The collected 1D measurements are digitized and then reshaped into 2D measurements. (b) In SPIS, we scan and sample the scene using small-sized optimized patterns, which achieve higher sampling performance with an order of magnitude fewer pattern parameters. The 2D measurements are fed into the encoder to extract high-dimensional semantic features, which are then sent to the task-specific plug-and-play decoder to complete large-scale SPI or image-free sensing. (c) The transformer-based encoder and the UDL function guide SPIS to pay more attention to the detail-rich target regions of the scene, thus extracting high-dimensional semantic features that are effective for imaging and sensing. (d) The existing state-of-the-art SPI method ReconNet [15] cannot reconstruct clear images at a sampling rate of 3% and a resolution of
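To make the acquisition pipeline in Fig. 1(a) and 1(b) concrete, below is a minimal NumPy sketch of block-wise single-pixel sampling with small optimized patterns and the reshaping of the 1D detector readings into a 2D measurement map. The function name, block size, and pattern count are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np

def sample_scene(scene, patterns, block=32):
    """Block-wise single-pixel sampling (hypothetical sketch).

    scene:    (H, W) float array, the target scene.
    patterns: (M, block, block) small illumination patterns.
    Returns a (M, H//block, W//block) 2D measurement map.
    """
    H, W = scene.shape
    M = patterns.shape[0]
    rows, cols = H // block, W // block
    meas_2d = np.zeros((M, rows, cols), dtype=np.float32)
    for i in range(rows):
        for j in range(cols):
            patch = scene[i * block:(i + 1) * block, j * block:(j + 1) * block]
            # Each single-pixel measurement is the inner product of one pattern
            # with the illuminated patch (one photodetector reading per pattern).
            meas_2d[:, i, j] = (patterns * patch).sum(axis=(1, 2))
    return meas_2d

# Example: a 1024x1024 scene sampled at ~3% within each 32x32 block.
scene = np.random.rand(1024, 1024).astype(np.float32)
m = max(1, int(0.03 * 32 * 32))                       # ~30 patterns per block
patterns = np.random.rand(m, 32, 32).astype(np.float32)
measurements = sample_scene(scene, patterns)
print(measurements.shape)                              # (30, 32, 32)
```

The resulting 2D measurement map is what the encoder in Fig. 1(b) would consume; in practice the patterns are jointly optimized with the network rather than random, as assumed here for brevity.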
Fig. 2. SPIS network structure. (a) Overview of the SPIS technique. (b) The transformer-based encoder. (c) The decoder for large-scale imaging and image-free segmentation. The decoder consists of multiple upsampling convolution blocks, each of which consists of multiple convolutional layers and one upsampling layer. (d) The decoder for image-free object detection.
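As a rough illustration of the decoder described in Fig. 2(c), the following PyTorch sketch shows one "upsampling convolution block" built from several convolutional layers followed by one upsampling layer. The layer counts, channel widths, and activation choices are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class UpConvBlock(nn.Module):
    """One decoder block: several conv layers, then one upsampling layer (sketch)."""
    def __init__(self, in_ch, out_ch, n_convs=2):
        super().__init__()
        layers, ch = [], in_ch
        for _ in range(n_convs):
            layers += [nn.Conv2d(ch, out_ch, kernel_size=3, padding=1),
                       nn.BatchNorm2d(out_ch),
                       nn.ReLU(inplace=True)]
            ch = out_ch
        # One upsampling layer doubles the spatial resolution of the features.
        layers.append(nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False))
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        return self.block(x)

# Example: stacking blocks progressively maps coarse encoder features
# toward a large-scale output (here 32x32 -> 256x256).
feat = torch.randn(1, 256, 32, 32)
decoder = nn.Sequential(UpConvBlock(256, 128), UpConvBlock(128, 64), UpConvBlock(64, 32))
print(decoder(feat).shape)  # torch.Size([1, 32, 256, 256])
```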
Fig. 3. Two-step training strategy of the SPIS network. (a) Overview of the two-step training strategy for imaging and image-free segmentation. In the first stage, the network estimates both the output result and the uncertainty values. In the second stage, the uncertainty values are used to generate a spatially adaptive weight that guides the network to prioritize pixels in texture-rich and edge regions. (b) Overview of the two-step training method for image-free single-pixel object detection. (c) Ablation study of the UDL loss function. "Step 1" denotes the output of SPIS before training with UDL, and "Step 2" denotes the output of SPIS after training with UDL.
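The two-step uncertainty-driven training in Fig. 3 can be sketched as follows in PyTorch. The variable names and the exact weighting rule are assumptions: step 1 uses an aleatoric-style loss to jointly learn the prediction and a per-pixel uncertainty map, and step 2 reuses that (detached) uncertainty map as a spatially adaptive weight so texture-rich and edge regions receive larger gradients.

```python
import torch

def step1_loss(pred, sigma, target):
    # Step 1: jointly learn the result and a per-pixel uncertainty map sigma
    # (aleatoric-style formulation; a sketch, not the paper's exact loss).
    return (torch.abs(pred - target) / (sigma + 1e-6) + torch.log(sigma + 1e-6)).mean()

def step2_loss(pred, sigma, target):
    # Step 2: the detached uncertainty map becomes a spatially adaptive weight,
    # emphasizing high-uncertainty (texture-rich, edge) pixels.
    weight = 1.0 + sigma.detach() / (sigma.detach().mean() + 1e-6)
    return (weight * torch.abs(pred - target)).mean()

# Example with dummy tensors.
pred   = torch.rand(1, 1, 256, 256)
sigma  = torch.rand(1, 1, 256, 256) + 0.1   # predicted uncertainty, kept positive
target = torch.rand(1, 1, 256, 256)
print(step1_loss(pred, sigma, target).item(), step2_loss(pred, sigma, target).item())
```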
Fig. 4. SPI experiment. (a) Statistical results of the pattern comparison, noise robustness, and uncertainty loss function ablation studies. (b) The proof-of-concept setup for large-scale SPI and image-free single-pixel sensing. (c) Visualization results of large-scale SPI on 3D scenes at a sampling rate of 3% and a resolution of
Fig. 5. Image-free single-pixel segmentation experiment. (a) Segmentation performance of the five compared methods and our proposed image-free single-pixel segmentation method at different sampling rates. (b) Visualization results of the image-free single-pixel segmentation comparison at different sampling rates (SRs).
Fig. 6. Image-free single-pixel object detection experiment. (a) Statistical results of the pattern sampling performance comparison and noise interference experiments. (b) Visualization results of image-free single-pixel object detection. The "min" and "max" represent the relative coordinates of the upper left and lower right corners of the target bounding box, respectively. To better demonstrate the detection results, the output of SPIS is overlaid on the input scene.
Fig. 7. Sampling process of small-sized patterns. We take a
Lintao Peng, Siyu Xie, Hui Lu, Liheng Bian, "Large-scale single-pixel imaging and sensing," Adv. Photon. Nexus 4, 026010 (2025)
Category: Research Articles
Received: Aug. 28, 2024
Accepted: Jan. 15, 2025
Published Online: Feb. 27, 2025
Author Email: Liheng Bian (bian@bit.edu.cn)