Photonics Research, Volume 13, Issue 2, 511 (2025)

Lensless efficient snapshot hyperspectral imaging using dynamic phase modulation (Editors' Pick)

Chong Zhang1,2, Xianglei Liu3,7, Lizhi Wang4, Shining Ma1, Yuanjin Zheng5, Yue Liu1,2, Hua Huang6, Yongtian Wang1, and Weitao Song1,2,*
Author Affiliations
  • 1Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
  • 2Zhengzhou Research Institute, Beijing Institute of Technology, Zhengzhou 450000, China
  • 3State Key Laboratory of Radio Frequency Heterogeneous Integration, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
  • 4School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China
  • 5School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
  • 6School of Artificial Intelligence, Beijing Normal University, Beijing 100875, China
  • 7e-mail: liuxiangleiinrs@gmail.com

    Snapshot hyperspectral imaging based on a diffractive optical element (DOE) is increasingly featured in recent progress in deep optics. Despite remarkable advances in spatial and spectral resolution, the limitations of current photolithography technology have prevented fabricated DOEs from reaching the ideal height profiles and high diffraction efficiency, diminishing the effectiveness of coded imaging and the reconstruction accuracy in some bands. Here, we propose, to our knowledge, a new lensless efficient snapshot hyperspectral imaging (LESHI) system that utilizes a liquid-crystal-on-silicon spatial light modulator (LCoS-SLM) to replace the traditionally fabricated DOE, resulting in high modulation levels and reconstruction accuracy. Beyond the single-lens imaging model, the system can leverage the switching capability of the LCoS-SLM to implement distributed diffractive optics (DDO) imaging and enhance diffraction efficiency across the full visible spectrum. Using the proposed method, we develop a proof-of-concept prototype with an image resolution of 1920×1080 pixels, an effective spatial resolution of 41.74 μm, and a spectral resolution of 10 nm, while improving the average diffraction efficiency from 0.75 to 0.91 over the visible wavelength range (400–700 nm). Additionally, LESHI allows the focal length to be adjusted from 50 mm to 100 mm without additional optical components, providing a cost-effective and time-saving solution for real-time on-site debugging. LESHI is, to the best of our knowledge, the first imaging modality to combine dynamic diffractive optics with snapshot hyperspectral imaging, offering a completely new approach to computational spectral imaging and deep optics.

    1. INTRODUCTION

    Hyperspectral imaging captures multi-band spectral images by examining the reflection or radiation from an object or scene across successive wavelengths of light. With its capacity for high spatial and spectral resolution, hyperspectral imaging has great potential in biology and medicine [1–3], agriculture and forestry [4–6], oceans and astronomy [7,8], military and defense [9], and art and cultural relics [10]. Traditional hyperspectral imaging systems acquire spectral data through diverse techniques, such as whiskbroom, pushbroom, and wavelength scanning [11–13]. Although yielding accurate spectral components, these systems sacrifice imaging time or space and therefore cannot capture dynamic scenes in real time. Thus, researchers have conceived a range of snapshot hyperspectral imaging (SHI) systems to realize real-time wide-field imaging spectrometers.

    SHI consists of an optical hardware encoder and a software decoder. Based on their encoding strategies, contemporary SHI systems can be classified into amplitude-encoding [14–17] and phase-encoding [18–22] categories. Amplitude-encoding SHI systems, typified by the coded aperture snapshot spectral imaging (CASSI) system and its variants, consist of front optics, a pseudorandom binary coded aperture, relay lenses, dispersive elements (e.g., a grating or prism), and a focal plane array detector [14]. The pseudorandom binary pattern has a 50% transmission ratio and is placed at the effective field stop of the imaging system [14,23,24]. While successfully retrieving spectral images through compressed measurements, these systems suffer from low optical throughput and bulky designs [12,25]. In contrast, phase-encoding methods manipulate the phase of incident light through a custom-designed ultrathin diffractive lens, which yields a coded diffractive image with spectral separation [18,21,26]. The phase modulation element in SHI is typically a diffractive optical element (DOE) [20,27–30], a metasurface [31,32], or another nanomaterial [33–35]. Among these components, DOE-based SHI stands out for its compact, ultrathin, easily manufactured structure and remarkable dispersion capability.

    The hardware encoder of DOE-based SHI introduces specific phase delays for different wavelengths by customizing the height map of the DOE. Different patterns can be used to design the height map to achieve phase modulation, such as Fresnel, cubic, multi-focal, diffractive achromat, hybrid diffractive-refractive, and square cubic [36,37]. Related research has progressively addressed problems such as point spread function (PSF) inhomogeneity across spectral bands, chromatic aberrations, and mismatches between design and fabrication [18,36,38–40]. Still, non-negligible gaps exist between the ideal physical design and the actual practice of DOE. For instance, limitations in stabilized lithography restrict the quantization levels of DOE height maps to eight [39]. Furthermore, when the incident wavelength deviates from the design wavelength, the diffraction efficiency of the DOE is significantly reduced [40,41]. In terms of manufacturing, existing DOE deployments in deep optics include either a single DOE that performs both imaging and phase modulation [28,39], or a phase-modulating DOE coupled with a simple lens dedicated to imaging [42]. As a result, the single-DOE configuration is preferred to keep the system compact.
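For intuition, the wavelength dependence of a DOE's phase delay can be sketched from its height map: a surface step designed for a full 2π delay at one wavelength over- or under-modulates at the others, which is the root of the efficiency loss described above. The refractive index and design wavelength below are illustrative assumptions, not values from this work.

```python
import numpy as np

def doe_phase_delay(height, wavelength, n_lambda):
    """Phase delay (radians) of a DOE surface step at a given wavelength.

    height     : surface height in meters
    wavelength : wavelength in meters
    n_lambda   : refractive index of the DOE material at this wavelength (assumed)
    """
    return 2.0 * np.pi * (n_lambda - 1.0) * height / wavelength

# A step giving a full 2*pi delay at a 550 nm design wavelength (n = 1.46 assumed)
h = 550e-9 / (1.46 - 1.0)
delay_550 = doe_phase_delay(h, 550e-9, 1.46)  # 2*pi at the design wavelength
delay_450 = doe_phase_delay(h, 450e-9, 1.46)  # ~2.44*pi: overshoot away from it
```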

    The software decoder in DOE-based SHI solves an ill-posed inverse problem to retrieve a high-fidelity spectral data cube from the captured single measurement. Reconstruction methods include analytical-modeling-based methods [18,43] and deep-learning-based methods [12,44–46]. Analytical-modeling-based algorithms adopt handcrafted priors, e.g., total variation and non-local self-similarity, to constrain the solution to the desired signal space, which relies on long iteration times and empirical parameter tuning to obtain optimal results [43]. Deep-learning-based methods include U-net [47] and its variants [48], RNN [49], Transformer [50], LSTM [17], and Mamba [51]; they achieve end-to-end high-fidelity spectral image reconstruction. However, neglecting the low-level limitations of lithography techniques and the diffraction efficiency of the DOE presents practical challenges in real data reconstruction, including alignment errors and stray light from the transition areas of the ring structure.

    Liquid-crystal-on-silicon spatial light modulators (LCoS-SLMs) can dynamically emulate the phase modulation of a DOE, generating specific phase delays or optical path differences for different wavelengths by controlling the state of the liquid crystal pixels on their surface. The LCoS-SLM supports quantization levels up to 256, allowing floating-point gray-level design schemes [52]. These features can mitigate the low-accuracy issues arising from the fewer-than-16-level height limitation of manufactured DOEs. Moreover, the LCoS-SLM can dynamically load multiple DOE simulation patterns with different design wavelengths at a frame rate of 180 Hz. This feature enables higher diffraction efficiency and improves spectral recovery accuracy, while also allowing convenient alteration of the imaging focal length. Furthermore, the repeatable refresh capability of the LCoS-SLM dramatically improves efficiency and reduces the cost of manufacturing DOEs, facilitating efficient real-time model debugging in the field. With these merits, LCoS-SLMs have been employed for phase modulation in achromatic imaging [52], super-resolution imaging [53], ultrafast imaging [54], extended depth of field [55], and computational holographic imaging [56,57], but little research has been conducted on DOE-based SHI.
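The efficiency cost of coarse quantization can be estimated with the standard scalar-diffraction result for an N-level phase profile, η = sinc²(1/N), evaluated at the design wavelength. This is a textbook approximation used here for intuition, not a measurement from this work.

```python
import numpy as np

def quantized_doe_efficiency(levels):
    """First-order diffraction efficiency of an N-level phase profile at the
    design wavelength: eta = sinc^2(1/N) (scalar diffraction approximation).
    Note: np.sinc(x) = sin(pi*x)/(pi*x)."""
    return float(np.sinc(1.0 / levels) ** 2)

# 8-level lithographic DOE (~0.95) vs. 256-level LCoS-SLM quantization (~1.0)
eta_8, eta_256 = quantized_doe_efficiency(8), quantized_doe_efficiency(256)
```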

    To bridge this gap, we propose a new lensless efficient SHI (LESHI) system aided by an LCoS-SLM. LESHI utilizes an LCoS-SLM to replace a single fabricated DOE as the hardware encoder, simultaneously realizing imaging and phase modulation. For the software decoding process, we developed a learning algorithm based on the ResU-net architecture that takes into account the sensor's response function and the diffraction efficiency of the DOE. Using the developed algorithm, high-resolution 31-channel spectral images can be reconstructed from a captured three-channel red-green-blue (RGB) image. To improve the diffraction efficiency obtainable with a single simulated DOE and to exploit the switching capability of the LCoS-SLM, we propose a distributed diffractive optics (DDO) model that dynamically controls the light phase. Multiple phase modulation patterns can thus be loaded onto the LCoS-SLM, resulting in high reconstruction accuracy and high diffraction efficiency throughout the full visible spectral range (400–700 nm). Furthermore, the LESHI system can modify the focal length and the field of view without adding other optical components, demonstrating its tunability. The entire imaging system adopts an end-to-end approach to modeling, training, and optimization, ensuring a high level of integration and coordination for optimal performance. In a nutshell, LESHI not only resolves the errors between high-level DOE design and fabrication, as well as the optical alignment difficulties during assembly, but also leverages multiple simulated DOEs for imaging in different spectral bands, thus improving the diffraction efficiency and spectral reconstruction accuracy over the entire visible spectrum. At the same time, it enables convenient modification of focal lengths and real-time on-site debugging, greatly reducing the production cost and time of DOEs. Extensive simulations and real-world hardware experiments validate the superior performance of the system.

    2. RESULTS

    A. Operating Principle of LESHI

    The schematic of the LESHI system is shown in Fig. 1. A light source (CIE standard illuminant D65, Datacolor Tru-Vue light booth) is used to illuminate the object. The reflected light from the sample passes through the polarizer (GCL-050003), is reflected by a beam splitter (GCC-M402103), and impinges on the LCoS-SLM (FSLM-2K39-P02, 8-bit grayscale level of 256 steps, 180-Hz refresh rate) loaded with optimized DOE patterns. Since the liquid crystal layer has different refractive indices for different wavelengths [52,53], it produces different phase delays across the spectrum, like a DOE, splitting the continuous hyperspectral data cube. Thus, when a light wave passes through the liquid crystal layer of the LCoS-SLM, the modulation of each pixel changes the phase of the light wave. Finally, the phase-modulated light reflected from the LCoS-SLM transmits through the beam splitter and is recorded by a color CMOS camera (ME2P-1230-23U3C, which contains a Bayer filter).


    Figure 1.Schematic of the lensless efficient snapshot hyperspectral imaging (LESHI) system. LCoS-SLM, liquid crystal on silicon-based spatial light modulator. LESHI comprises hardware-based diffractive imaging and software-based hyperspectral reconstruction algorithms. The diffractive imaging component includes an LCoS-SLM, a polarizer, a beam splitter, and a color CMOS camera. The hyperspectral reconstruction algorithm employs a ResU-net to decode the spectral information.

    The working principle of LESHI is illustrated in Fig. 2(a). In the forward propagation of the model, LESHI sequentially performs the compression of the spectral data cube into a three-channel RGB snapshot, the reconstruction of the 31-channel spectral cube from the snapshot, and the calculation of the loss function between the reconstruction results and the ground truth. In backward propagation, the model optimizes its variables (e.g., the value of each pixel of the phase modulation pattern and the parameters of the neural network) by minimizing the loss function with gradient descent. Notably, we take the diffraction efficiency into account in the model, which is neglected in existing learning-based methods [18,22,27–30]. Besides, a rotationally symmetric design [28] is used to reduce the computational complexity of the phase delay pattern.


    Figure 2. Working principle of LESHI. (a) Pipeline of LESHI. n_z denotes the number of spectral channels from λ_0 to λ_n. η denotes sensor noise. * denotes the convolution operator. ∂/∂P and ∂/∂y_in denote the derivative of the imaging model with respect to the PSF and the derivative of the reconstruction network with respect to the captured image, respectively. L_h and L_de denote the hyperspectral-image reconstruction loss and the diffraction-efficiency loss, respectively. ‖W‖₂² denotes the squared L2 norm and W denotes the network weights; β is a scale constant set to 10⁻⁴. (b) Schematic of the PSF acquisition process in diffractive optical imaging based on LCoS-SLM with DOE patterns. I_0(x,y;λ) denotes the input scene and I_c(x,y;λ) is its convolution result with the PSF, P(x,y;λ). (c) DDO model design based on LCoS-SLM. DDO fuses the PSFs of individual DOEs of the different bands and adds the diffraction-efficiency model to form a degenerate PSF model. (d) Structure of the ResU-net reconstruction algorithm, which combines the U-shaped architecture of U-net with the residual connections of ResNet.

    Figure 2(b) shows the imaging process of the LESHI system with a representative PSF (details in Fig. 7 of Appendix A). A spectral dataset in the visible band with 31 channels and 10-nm spectral resolution is convolved with the PSF to yield the snapshot. The forward model of LESHI is expressed as

$$I_c^{\{R,G,B\}}(x,y) = \int_{\lambda_a}^{\lambda_b} \left[ P(x,y;\lambda) * I_0(x,y;\lambda) \right] R_c(\lambda)\, \mathrm{d}\lambda + \eta.$$

Here, P(x,y;λ) denotes the PSF, I_0(x,y;λ) denotes the spectral image in each channel, * denotes a 2D convolution operator, R_c(λ) denotes the spectral response function of each channel, and η is Gaussian noise. λ_a and λ_b are the minimum and maximum wavelengths, respectively, and I_c^{R,G,B} denotes the snapshot (details in Appendix A).
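A minimal numerical sketch of this forward model, assuming a discretized 31-band cube and one PSF per band (FFT-based circular convolution is used purely for brevity; it is not necessarily the boundary handling used by the authors):

```python
import numpy as np

def forward_rgb_snapshot(cube, psfs, response, noise_std=0.0, rng=None):
    """Discretized forward model: per-band PSF convolution, then spectral integration.

    cube     : (L, H, W) hyperspectral scene I0(x, y; lambda)
    psfs     : (L, H, W) centered PSFs P(x, y; lambda), one per band
    response : (3, L) sensor response R_c(lambda) for the R, G, B channels
    """
    # FFT-based circular convolution of each band with its own PSF
    kernels = np.fft.ifftshift(psfs, axes=(-2, -1))
    coded = np.real(np.fft.ifft2(np.fft.fft2(cube) * np.fft.fft2(kernels)))
    # Discrete spectral integration against each channel's response curve
    rgb = np.tensordot(response, coded, axes=([1], [0]))  # -> (3, H, W)
    if noise_std > 0.0:
        rng = rng or np.random.default_rng(0)
        rgb = rgb + rng.normal(0.0, noise_std, rgb.shape)  # Gaussian sensor noise
    return rgb
```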

    To improve the diffraction efficiency and account for noise effects on the quality of the reconstructed images, the ideal PSF without diffraction efficiency can be transformed into the first-order degenerate PSF (D-PSF). The D-PSF provides a more accurate representation of the imaging system and is expressed as

$$P(x,y;\lambda) = \gamma(\lambda) P_{\mathrm{ideal}} + \left[ 1 - \gamma(\lambda) \right] P_{\mathrm{BN}},$$

where P_ideal is the ideal PSF of the diffraction imaging model, P_BN denotes Gaussian noise (the subscript "BN" denotes background noise), and γ(λ) denotes the diffraction efficiency. The derivation for the diffraction efficiency is shown in Appendix B. Figure 2(c) illustrates the combination of D-PSFs, which is described in the DDO model in Appendix C.
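The D-PSF can be formed in a few lines; the Gaussian background term and its energy normalization below are illustrative choices, not specified by the text:

```python
import numpy as np

def degenerate_psf(psf_ideal, gamma, sigma=0.01, rng=None):
    """First-order degenerate PSF: P = gamma * P_ideal + (1 - gamma) * P_BN.

    gamma is the diffraction efficiency in [0, 1]; P_BN is a Gaussian background
    term (the sigma and the normalization here are illustrative assumptions).
    """
    rng = rng or np.random.default_rng(0)
    p_bn = np.abs(rng.normal(0.0, sigma, psf_ideal.shape))
    p_bn /= p_bn.sum()  # keep the background term energy-normalized
    return gamma * psf_ideal + (1.0 - gamma) * p_bn
```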

    After data acquisition, the captured images are used as the input to a customized ResU-net [Fig. 2(d), details in Appendix D], which retrieves 31-channel spectral images. The loss function L consists of three parts: the reconstruction loss, the diffraction-efficiency loss, and the L2 regularization on the network weights:

$$L = \frac{1}{K} \| \tilde{I} - I_0 \|_1 + \alpha \frac{1}{J} \| P - P_{\mathrm{ideal}} \|_2^2 + \beta \| W \|_2^2.$$

Here, α and β are scale constants set to 10⁻³ and 10⁻⁴, respectively. ‖Ĩ − I₀‖₁ denotes the mean absolute error between the reconstructed hyperspectral image Ĩ and the ground truth I₀, and K denotes the pixel count of the image. ‖P − P_ideal‖₂² denotes the squared L2 norm, where P denotes the D-PSF and J denotes the pixel count of the LCoS-SLM. ‖W‖₂² denotes the squared L2 norm of the network weights W.
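Numerically, the three-term loss can be sketched as follows (a NumPy stand-in for the actual training framework; the α and β values follow the text):

```python
import numpy as np

def leshi_loss(recon, gt, psf, psf_ideal, weights, alpha=1e-3, beta=1e-4):
    """Three-term training loss: reconstruction (L1), diffraction efficiency, L2 reg.

    recon, gt      : reconstructed and ground-truth spectral cubes (same shape)
    psf, psf_ideal : D-PSF and ideal PSF (same shape; J pixels)
    weights        : flattened network weights W
    """
    l_recon = np.abs(recon - gt).mean()                # (1/K) * ||I~ - I0||_1
    l_de = alpha * np.square(psf - psf_ideal).mean()   # alpha * (1/J) * ||P - P_ideal||_2^2
    l_reg = beta * np.square(weights).sum()            # beta * ||W||_2^2
    return l_recon + l_de + l_reg
```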

    B. Validation of the LESHI Model

    To verify the LESHI model, we conducted a comprehensive simulation using the ICVL dataset [58], which consists of 201 spectral scenes randomly split into training (160 scenes), validation (21 scenes), and testing (20 scenes) sets. To match the hyperparameters of the model, each 1930×1300 scene was cropped into nine overlapping slices of 512×512 each. The scene was placed 1.5 m away from the LCoS-SLM, and the LCoS-SLM was positioned 70 mm away from the sensor plane. The LCoS-SLM had a pixel pitch of 4.5 μm × 4.5 μm, a pixel count of 1024×1024, and an 8-bit grayscale range of 256 levels. The color CMOS camera had the same pixel count and pitch as the LCoS-SLM.
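The nine-slice cropping can be reproduced with a simple overlapping grid; the exact overlap used by the authors is not specified, so an evenly spaced 3×3 grid is assumed here:

```python
import numpy as np

def overlapping_crops(img, size=512, grid=(3, 3)):
    """Crop a scene into a grid of overlapping square slices (nine 512x512 by default)."""
    H, W = img.shape[:2]
    ys = np.linspace(0, H - size, grid[0]).astype(int)  # evenly spaced top edges
    xs = np.linspace(0, W - size, grid[1]).astype(int)  # evenly spaced left edges
    return [img[y:y + size, x:x + size] for y in ys for x in xs]

# A 1930 x 1300 scene (stored height-first) -> nine overlapping 512 x 512 slices
scene = np.zeros((1300, 1930, 31))
slices = overlapping_crops(scene)
print(len(slices), slices[0].shape[:2])  # 9 (512, 512)
```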

    We conducted a simulation to generate the PSFs by setting the parameters of the diffraction imaging model and the camera spectral response functions. Figure 3(a) shows the ground truth in the test set. We then systematically simulated the phase modulation patterns. Specifically, LESHI employs end-to-end optimization to generate a phase modulation pattern loaded onto the LCoS-SLM. The resulting pattern, shown in Fig. 3(b), is an 8-bit, 256-level grayscale pattern. By adjusting the gray level of each liquid crystal pixel, the phase delay magnitude can be modified across the spectrum. Because the system modulates different spectral bands to different degrees, the diffracted spot sizes vary across channels, producing a white haze over the captured RGB images. Figure 3(c) displays the simulated images captured by the color CMOS camera, providing a visual representation of this white haze phenomenon. The customized ResU-net takes the snapshot as input and reconstructs 31-channel hyperspectral images. Figure 3(d) shows the reconstructed spectral image in RGB. Figure 3(e) shows the reconstructed 31-channel spectral images using a single LCoS-SLM loaded with a single simulated DOE, colored with the RGB values of the corresponding wavelengths. In addition, we validated the effectiveness of the DDO model in terms of diffraction efficiency.


    Figure 3. Validation of the LESHI model. (a) Ground truth from the ICVL dataset. (b) Trained simulated DOE pattern loaded onto the LCoS-SLM. (c) RGB image generated by the LESHI model with a single DOE pattern. (d) Reconstructed result of (c). (e) Reconstructed hyperspectral images using the LESHI model with a single DOE pattern. (f) Ground truth and reconstructed values of the spectral radiance curves for local area "1" marked in (a). (g) Same as (f) but for local area "2". (h) Diffraction efficiency as a function of wavelength, using a single DOE pattern (LCoS-S) and multiple DOE patterns (LCoS-D) in the LESHI model. The table shows the relative diffraction efficiency gain (RDEG) of LCoS-D compared to LCoS-S in three bands (400–500 nm, 500–600 nm, 600–700 nm).

    To verify the accuracy of the models for spectral reconstruction, we compared the average spectral radiance of the reconstructed and true spectral images. Two 4×4-pixel regions [marked by white boxes with numbers "1" and "2" in Fig. 3(a)] were randomly selected in the scene. Their average spectral radiances, for the reconstruction result and the ground truth, are shown as a function of wavelength in Figs. 3(f) and 3(g) for regions "1" and "2", respectively. The difference between the reconstructed images and the ground truth is shown as the shaded areas in Figs. 3(f) and 3(g), with an average value of 3.54% for region "1" and 3.21% for region "2". LESHI retrieves the spectral trend well across the full range. We also compared the diffraction efficiencies of a single DOE pattern [marked LCoS-S in Fig. 3(h)] and DDO-based DOE patterns [marked LCoS-D in Fig. 3(h)] in different wavelength bands. The results show that LCoS-D has an average diffraction efficiency of 0.91, greater than the 0.75 of LCoS-S. The inset table in Fig. 3(h) shows the relative diffraction efficiency gain (RDEG) of LCoS-D compared to LCoS-S, quantifying the improvement in diffraction efficiency in three bands (400–500 nm, 500–600 nm, and 600–700 nm). To investigate the impact of different DOE pattern quantization levels on spectral reconstruction, we examined four levels (4, 16, 64, and 256) in LESHI; the corresponding results are shown in Appendix E. Furthermore, we elaborate on the advantages of the LCoS-D model over the fabricated DOE and LCoS-S in terms of spectral reconstruction quality in Appendix F. Compared to other SHI modalities such as CASSI [14] and Fresnel lenses [36], LESHI stands out for its compact system design and high spectral reconstruction quality (details are given in Appendix G).

    C. Quantification of the System’s Performance of LESHI

    Building upon the LESHI model, we constructed the LESHI system. To characterize its spatial resolution, an ISO12233 resolution test chart (3nh, SIQ) was used. The distance between the test chart and the LCoS-SLM was 1.2 m, the aspect ratio of the test chart was 4:3, and the focal length of LESHI was set to 50 mm. Moreover, we mitigated the effect of multiple diffraction orders by adding a polarizer in front of the LCoS-SLM and increasing its phase quantization to the highest level (256). Figure 4(a) shows the reconstructed resolution test chart, which preserves abundant low- and high-frequency information. Figures 4(b) and 4(c) plot the reconstructed intensity profiles of two groups of lines at different locations [marked by light orange and teal boxes in Fig. 4(a)] on the resolution target against the ground truth intensity profiles. With the Rayleigh resolution criterion, the effective spatial resolution of the LESHI system was characterized as 15.74 μm.


    Figure 4. Characterization of the LESHI system performance. (a) Reconstructed image of the ISO12233 test chart. (b) Spatial line profiles of two regions on the test chart, highlighted in light orange and teal boxes at the location of label 1 in (a). (c) Spatial line profiles of two regions on the test chart, highlighted in light blue and teal boxes at the location of label 2 in (a). (d) Measurement of the LESHI system. (e) Reconstruction result of (d) in RGB format. (f) Root mean square error (RMSE) and maximum error of the reconstructed image and the measurement by the CS-2000 spectrometer at six local regions [marked by white boxes in (d)]. (g) Reconstructed radiance curves of six local regions [marked by white boxes in (d)] as a function of wavelength. Ground truth is obtained by the CS-2000 spectrometer. (h) Seven representative reconstructed spectral channels of (d).

    The spectral resolution of the LESHI system was evaluated by comparing the reconstruction results with spectral values obtained by a spectrometer measuring a ColorChecker Digital SG. The measurement of the color calibrator using the LESHI system is shown in Fig. 4(d). Figure 4(e) shows the reconstructed 31-channel spectral composite image in RGB form. Figure 4(f) shows the root mean square error (RMSE, left y-axis) and maximum single-spectral-channel error (right y-axis) between the real and reconstructed spectral luminance values at six locations [marked by the white dashed boxes with letters "A"–"F" in Fig. 4(d)]. Figure 4(g) shows the reconstructed spectral luminance values at these six locations. The reconstruction data demonstrate that the LESHI system's spectral reconstruction at the six locations exhibits minimal error when compared to the measured values. Figure 4(h) showcases seven representative reconstructed spectral channels of the color calibrator. These 10-nm spectral channels have center wavelengths at 410 nm, 450 nm, 490 nm, 530 nm, 570 nm, 630 nm, and 680 nm. All reconstructed spectral channels are shown in Visualization 1.

    D. Demonstration of Distributed Diffractive Optical Model

    To demonstrate the feasibility of applying the DDO model to LESHI, a Thorlabs Lab Snacks box was used as the test sample. First, we loaded three differently designed DOE patterns sequentially onto the LCoS-SLM and captured the corresponding RGB images. Second, we extracted the R, G, and B channels from the three captured images. Third, the R, G, and B channels with the highest diffraction efficiencies were combined. Finally, the newly synthesized RGB image was fed into the reconstruction network to retrieve 31-channel spectral images. Figure 5(a) shows the measured RGB image using a single DOE pattern (LCoS-S) and the seven representative reconstructed channels (center wavelengths at 410 nm, 450 nm, 490 nm, 530 nm, 570 nm, 630 nm, and 680 nm). Figure 5(b) is the same as Fig. 5(a) except that multiple simulated DOE patterns (LCoS-D) were used. The comparison shows that the DDO-model-based reconstruction results are better than those of the single DOE pattern. All reconstructed spectral images generated by LCoS-S and LCoS-D are shown in Visualization 2.
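The four-step DDO acquisition above reduces, per color channel, to picking the capture whose loaded pattern has the highest simulated diffraction efficiency for that channel. A minimal sketch under that reading (the efficiency table is a hypothetical input; in the paper it comes from the model):

```python
import numpy as np

def fuse_ddo_channels(captures, efficiencies):
    """Fuse a single RGB snapshot from three DDO captures.

    captures     : (3, H, W, 3) RGB images, one per loaded DOE pattern
    efficiencies : (3, 3) entry [i, c] = simulated diffraction efficiency of
                   pattern i for channel c
    """
    best = np.argmax(efficiencies, axis=0)  # most efficient pattern per channel
    fused = np.stack([captures[best[c], :, :, c] for c in range(3)], axis=-1)
    return fused, best
```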


    Figure 5. Demonstration of distributed diffractive optics (DDO) imaging. (a) Captured and reconstructed images based on a single simulated DOE. (b) Captured and reconstructed images based on multiple simulated DOEs (DDO model). (c) Reconstructed values and ground truth of the spectral radiance based on the LCoS-S and LCoS-D models at the location of label 1 in (a). (d) Reconstructed values and ground truth of the spectral radiance based on the LCoS-S and LCoS-D models at the location of label 2 in (a). (e) Images and simulated diffraction efficiency (DE) of the R, G, and B channels captured by the LCoS-S- and LCoS-D-based models.

    To quantitatively analyze the reconstruction results, we measured the spectral radiance of two local areas [10×10 pixels, marked by white boxes with numbers "1" and "2" in Fig. 5(a)] in the scene using a spectrometer (CS-2000). Figures 5(c) and 5(d) compare the spectral radiance between the measured values and the reconstructed values of LCoS-S and LCoS-D at local areas "1" and "2", respectively. The error (right y-axis), calculated as the difference between the reconstructed results and the ground truth, is shown as the shaded areas in Figs. 5(c) and 5(d). The average error of LCoS-D is 1.84%, smaller than the 4.27% of LCoS-S. Both LCoS-S and LCoS-D retrieve the spectral trend well compared with the ground truth, but LCoS-D performs better at 500–700 nm. This indicates that LCoS-S suffers from degraded reconstruction accuracy due to diffraction inefficiency in some off-center bands, an issue that the LCoS-D model overcomes. The R, G, and B images of the same scene were captured under the LCoS-S [upper panel in Fig. 5(e)] and LCoS-D [bottom panel in Fig. 5(e)] models. The LCoS-D images are more blurred (white haze effect) than those of LCoS-S. This phenomenon can be attributed to the differing diffraction efficiencies across the R, G, and B channels: higher diffraction efficiencies result in a larger spread of the PSF in each channel, contributing to an increased degree of point scattering in the captured images.

    E. Application of Range Sensing via a Tunable Focal Length

    The tunable focal length of the LESHI system enables it to meet different needs for the imaging field of view and the range of the captured scene. The focal length can be modified by loading DOE patterns designed for different focal lengths. First, we trained DOE patterns with focal lengths ranging from 50 mm to 100 mm in steps of 2 mm, 26 patterns in total. Six representative DOE patterns (focal lengths of 50 mm, 60 mm, 70 mm, 80 mm, 90 mm, and 100 mm) are shown in Fig. 6(a). Second, each well-trained pattern was loaded onto the LCoS-SLM, and the CMOS camera was moved to the position corresponding to the focal length of the loaded pattern. Using the captured RGB images [Fig. 6(b)] at different focal lengths as the input to the well-trained neural network, the corresponding spectral images can be reconstructed with high fidelity [Fig. 6(c)]. The results show that the field of view shrinks as the focal length increases, which can be explained by the Lagrange-Helmholtz invariant (i.e., a longer focal length gives a smaller aperture angle in image space and thus a smaller object height). Figure 6(d) shows one representative reconstructed spectral channel at each of these focal lengths. The reconstructed spectral images over the focal length range (50–100 mm) are shown in Visualization 3. Notably, no optical elements are modified or added in the LESHI system during focal length changes, which dramatically reduces the complexity of the zoom optical system.
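As a rough stand-in for the trained patterns (which come from end-to-end optimization, not an analytic formula), a quantized Fresnel-lens phase profile shows how a 256-level gray pattern can encode a chosen focal length on the SLM; the pixel pitch and count follow the simulation settings quoted earlier, and the 550 nm design wavelength is an illustrative assumption.

```python
import numpy as np

def fresnel_phase_pattern(n_pix, pitch, focal_length, wavelength, levels=256):
    """256-level gray pattern encoding a thin-lens (Fresnel) phase for one focal length."""
    c = (np.arange(n_pix) - n_pix / 2 + 0.5) * pitch   # pixel-center coordinates (m)
    x, y = np.meshgrid(c, c)
    phase = -np.pi * (x**2 + y**2) / (wavelength * focal_length)  # thin-lens phase
    wrapped = np.mod(phase, 2 * np.pi)                 # wrap into [0, 2*pi)
    return np.round(wrapped / (2 * np.pi) * (levels - 1)).astype(np.uint8)

# Patterns for two of the focal lengths used in the experiment
p50 = fresnel_phase_pattern(1024, 4.5e-6, 0.050, 550e-9)
p100 = fresnel_phase_pattern(1024, 4.5e-6, 0.100, 550e-9)
```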

    Figure 6. Application results for focal length modification. (a) Phase modulation patterns loaded onto LCoS-SLM with different focal lengths by end-to-end training. (b) Corresponding captured RGB images of (a). (c) Results of spectral image recovery by applying the LESHI system at different focal lengths. (d) Six representative reconstructed spectral channels corresponding to (c).

    3. DISCUSSION AND CONCLUSIONS

    We have developed the LESHI system based on diffractive optics via the LCoS-SLM. LESHI employs a learning-based DOE pattern loaded onto the LCoS-SLM to perform phase modulation and imaging, instead of a physically fabricated DOE. Using the customized ResU-net algorithm, we have retrieved the 31-channel spectral cube with an image resolution of 1920×1080 pixels, an effective spatial resolution of 41.74 μm, and a spectral resolution of 10 nm across 400–700 nm from the color image captured by the CMOS camera. The comprehensive process of wavefront modulation, imaging, and spectral reconstruction is achieved through an end-to-end design approach that combines a diffractive imaging optical model with a deep-learning-based reconstruction algorithm to optimize the phase modulation profile. We have also proposed the DDO imaging model, which exploits the dynamic refresh capability of the LCoS-SLM to load multiple DOE patterns, each tailored to a different band, for band-specific phase modulation and imaging. This model improves the average diffraction efficiency from 0.75 to 0.91 across the entire spectral band. Meanwhile, the diffraction efficiency is incorporated into the PSF model to generate a new D-PSF that better matches the PSF in real scenes. Furthermore, the LESHI system supports real-time zooming: trained patterns with different focal lengths can be loaded onto the LCoS-SLM to modify the focal length and field of view without adding other optical components, and both quantities are tunable during model training. Extensive simulations and practical experiments demonstrate the superiority of the method in spectral image reconstruction.

    Compared to diffractive hyperspectral imaging via a fabricated DOE, the LESHI system has significant advantages in terms of spectral reconstruction accuracy, system flexibility, diffraction efficiency, and fabrication cost. The limitations of current lithography technology restrict the number of quantization levels of a fabricated DOE to only eight. This reduction in quantization levels lowers the resolution of the spectral phase modulation by the DOE and consequently weakens the reconstruction accuracy of the entire system. The LCoS-SLM offers 256 phase modulation gray levels and allows a floating-point gray-level design; this higher phase resolution makes it well suited to optimizing and replacing fabricated DOEs. High diffraction efficiency of a fabricated DOE is difficult to maintain across the entire 400–700 nm band due to limitations of the material and the design wavelength, whereas the LESHI system employs the DDO model to dynamically load multiple phase modulation patterns for different spectral bands, enhancing the diffraction efficiency of imaging. Besides, the high cost of DOE fabrication significantly restricts its potential applications; by dynamically loading patterns, the LCoS-SLM saves the time and cost of fabricating DOEs and improves the efficiency of real-time system debugging. In addition, the micrometer-scale feature size presents practical challenges for achieving pixel-level alignment of a DOE, which can cause calibration errors between the idealized camera model and the actual experiment. In contrast, the pattern loaded on the LCoS-SLM supports pixel-level translation, rotation, and grayscale flipping, which mitigates the difficulty of optical alignment in practical assembly.

    The principle of LESHI could be extended to other DOE-based imaging modalities. The LCoS-SLM can simulate DOE based on various patterns using high-level encoding and reloadable features, thereby improving the performance and efficiency of existing fabricated-DOE-based systems such as full-spectrum computational imaging [18], high-dynamic-range imaging [30], depth-spectral imaging [27], and achromatic extended depth of field and super-resolution imaging [36]. Besides, with an ultrashort chirped pulse as a light source, LESHI could be directly applied to ultrafast imaging [59] because the reconstructed spectral frames of LESHI can be linked to time information benefiting from the chirped pulse (i.e., the wavelength changes during the duration of the pulse).

    While the proposed distributed LESHI system improves the spectral imaging performance of the scene, the current model is based on the training of one dataset, which limits its ability to generalize to scenes in wide applications. In the future, the system will be comprehensively optimized by adding the required scene object information to the model training to improve the generalization ability of the model. In addition, deep unfolding networks [60] and plug-and-play mechanisms [61] will be considered to improve the flexibility of the network structure in handling different sizes of spectral cubes. Finally, the entire network model can be miniaturized by optimizing the network parameters, and the trained model can be loaded using FPGA hardware instead of GPU to improve the reconstruction speed of the spectrum.

    Acknowledgment

    The data table of LCoS-SLM spectral phase delay at different center wavelengths was provided by Xi’an CAS Microstar Optoelectronic Technology Co., Ltd.

    APPENDIX A: DERIVATION OF THE LESHI MODEL

    The traditional diffractive optical imaging model [28,29,39] is the foundation of snapshot hyperspectral imaging based on LCoS-SLM. This section describes the imaging process, in which a point light source in the field of view first passes through a polarizer, then undergoes reflection and phase delay on the LCoS-SLM, and finally propagates to the bare RGB sensor.

    The PSF elucidates a mathematical model [26] of the image generated by a point source as it traverses the complete imaging system. The sequential propagation of the wavefield emitted from point source P is illustrated, as it passes through the polarizer, beam splitter BS, and LCoS-SLM, ultimately reaching the sensor for imaging.

    Assume that the complex amplitude of the point source P with wavelength λ is U_i, and the LCoS-SLM is located at a distance d (d ≫ λ) from P. According to the principle of spherical wave propagation in free space, the wave field U_0 at location (x, y) of the LCoS-SLM incidence plane can be formulated as
$$U_0(x, y, d; \lambda) = U_i\,\frac{e^{i\frac{2\pi}{\lambda}r}}{r},$$
where r is the distance between any point (x, y, d) on the simulated DOE plane of the LCoS-SLM and P(x_0, y_0):
$$r = \sqrt{(x - x_0)^2 + (y - y_0)^2 + d^2}.$$

    When the wavefield U_0 reaches the LCoS-SLM, it generates a phase delay Δφ(x, y; λ). Thus, the reflected wavefield U_1(x, y, 0; λ) from the LCoS-SLM can be formulated as
$$U_1(x, y, 0; \lambda) = U_0(x, y, d; \lambda)\, e^{ik\,\Delta\varphi(x, y; \lambda)}.$$

    The phase delay Δφ(x, y; λ) is determined from the liquid crystal thickness d_lc(x, y) as
$$\Delta\varphi(x, y; \lambda) = \frac{2\pi}{\lambda}\, d_{lc}(x, y)\left[n_e(\theta) - n_o\right],$$
where n_o and n_e(θ) denote the refractive indices of the incident “o” light and “e” light at wavelength λ, respectively, θ is the angle between the wave vector direction and the director of the liquid crystal molecules, and d_lc(x, y) denotes the thickness of the liquid crystal at pixel (x, y).
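    As a minimal numerical sketch of this relation, the function below evaluates Δφ = (2π/λ) d_lc [n_e(θ) − n_o]; the cell thickness and refractive indices used in the example are assumed, illustrative numbers, not the device's calibrated data.

```python
import numpy as np

# Sketch of the phase-delay equation above. The index values and cell
# thickness are assumed for illustration only.
def lc_phase_delay(d_lc_um, n_e_eff, n_o, wl_nm):
    """Delta phi = (2*pi/lambda) * d_lc * (n_e(theta) - n_o), in radians."""
    return 2 * np.pi / (wl_nm * 1e-9) * (d_lc_um * 1e-6) * (n_e_eff - n_o)

# A 3 um cell with effective birefringence n_e - n_o = 0.2 gives slightly
# more than 2*pi of phase stroke at 550 nm, i.e., full-wave modulation there.
stroke = lc_phase_delay(3.0, 1.7, 1.5, 550.0)
```

The available stroke shrinks as 1/λ, which is one reason a single pattern cannot modulate all visible wavelengths equally well.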

    The Fresnel diffraction from U_1(x, y, 0; λ) to U_2(x, y, z; λ) can be calculated with the transfer function of angular spectrum propagation when the wavefield reaches the sensor plane at depth z:
$$U_2(x, y, z; \lambda) = \mathcal{F}^{-1}\{\mathcal{F}\{U_1(x, y, 0; \lambda)\}\, H(f_x, f_y, z; \lambda)\},$$
where f_x and f_y are the spatial frequencies along x and y, respectively, F{·} denotes the Fourier transform, and F^{-1}{·} is its inverse. The transfer function H(f_x, f_y, z; λ) is given by [26]
$$H(f_x, f_y, z; \lambda) = \exp\!\left[j\,\frac{2\pi z}{\lambda}\sqrt{1 - (\lambda f_x)^2 - (\lambda f_y)^2}\right].$$

    The ideal power density P_ideal(x, y; λ) of the PSF is proportional to the intensity at the sensor plane, which is the squared modulus of the wave field U_2(x, y, z; λ), so that P_ideal(x, y; λ) is formulated as
$$P_{\mathrm{ideal}}(x, y; \lambda) = |U_2(x, y, z; \lambda)|^2.$$
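    The propagation chain above (spherical wave, SLM phase delay, angular-spectrum transfer function, squared modulus) can be sketched as a short simulation. The grid size, pixel pitch, and distances below are assumed toy values chosen so the example runs quickly; they are not the prototype's parameters.

```python
import numpy as np

# End-to-end sketch of the PSF derivation: U0 -> U1 -> U2 -> |U2|^2.
def simulate_psf(phase, wl, pitch, d_src, z):
    n = phase.shape[0]
    c = (np.arange(n) - n / 2) * pitch
    x, y = np.meshgrid(c, c)
    r = np.sqrt(x**2 + y**2 + d_src**2)
    u0 = np.exp(1j * 2 * np.pi * r / wl) / r           # spherical wave U0
    u1 = u0 * np.exp(1j * phase)                       # phase delay at SLM, U1
    fx = np.fft.fftfreq(n, d=pitch)
    fxx, fyy = np.meshgrid(fx, fx)
    arg = 1 - (wl * fxx) ** 2 - (wl * fyy) ** 2
    h = np.exp(1j * 2 * np.pi * z / wl * np.sqrt(np.maximum(arg, 0)))
    h[arg < 0] = 0                                     # drop evanescent waves
    u2 = np.fft.ifft2(np.fft.fft2(u1) * h)             # U2 at the sensor plane
    psf = np.abs(u2) ** 2                              # P_ideal = |U2|^2
    return psf / psf.sum()                             # normalize total energy

psf = simulate_psf(np.zeros((256, 256)), wl=550e-9, pitch=8e-6,
                   d_src=0.5, z=0.05)
```

Substituting a lens-like or learned phase map for the zero phase produces the wavelength-dependent PSFs that encode the spectral information on the sensor.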

    Figure 7 shows the PSF reconstruction results of the 31 channels at 400–700 nm. Based on the visualization results, the system can recover all 31 channels of the spectral cube to a distinguishable level. The intensity and diffracted area of the PSF for different channels are different, reflecting the fact that the system has different degrees of imaging effects for different spectral bands of light.

    According to the principle of incoherent optical imaging, the diffractive imaging process is modeled as the convolution of the original image I_0(x, y; λ) with the system PSF; therefore, the modulated image is formulated as
$$I(x, y; \lambda) = P(x, y; \lambda) * I_0(x, y; \lambda),$$
where * denotes 2D convolution.
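    A minimal sketch of this convolution model, applied per spectral channel with FFT-based circular convolution; the scene and PSF here are placeholders for illustration only.

```python
import numpy as np

# One spectral channel of the incoherent imaging model: I = P * I0,
# implemented as a circular convolution via the FFT.
def image_channel(scene, psf):
    return np.real(np.fft.ifft2(
        np.fft.fft2(scene) * np.fft.fft2(np.fft.ifftshift(psf))))

rng = np.random.default_rng(0)
scene = rng.random((64, 64))
psf = np.zeros((64, 64))
psf[32, 32] = 1.0            # centered delta PSF: output should equal input
out = image_channel(scene, psf)
```

The full sensor image is obtained by applying this per channel and integrating over the sensor's spectral response.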

    Figure 7. LESHI-based point spread function for 31 channels at 400–700 nm. Due to the phase delay of LCoS-SLM for different spectra, the system has different point spread functions for different bands.

    Figure 8. Spectral response and modulation simulation curves of camera and LCoS-SLM. (a) Sensor spectral response curves. (b) Phase modulation curves of LCoS-SLM with different center wavelengths. (c) Diffraction efficiency of LCoS-SLM with different center wavelengths.

    APPENDIX B: DEFINITION OF DIFFRACTION EFFICIENCY

    Diffraction efficiency is a crucial metric for assessing the imaging capability of diffractive optical elements and plays a significant role in determining the usable spectral range of spectral imaging. By measuring the diffraction efficiency, one can evaluate how effectively these elements diffract light and produce high-quality images. This metric provides valuable insights into the performance and potential applications of diffractive optical elements in fields such as microscopy, spectroscopy, and remote sensing. The diffraction efficiency of a single-layer diffractive element can be expressed as [38]
$$\gamma_m(\lambda) = \mathrm{sinc}^2\!\left[m - \frac{\lambda_0}{\lambda} \times \frac{n(\lambda) - 1}{n(\lambda_0) - 1}\right].$$
Here, sinc(·) denotes the sinc (sampling) function, m is the diffraction order, λ_0 is the central wavelength, λ is the incident wavelength, and n(λ) and n(λ_0) are the refractive indices of the substrate material at the incident and central wavelengths, respectively.
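    This formula is straightforward to evaluate numerically. The sketch below assumes a dispersionless substrate (constant n across the band) purely for illustration, so the first-order efficiency peaks at the design wavelength and falls off on both sides.

```python
import numpy as np

# Single-layer diffraction efficiency: gamma_m(lambda) =
# sinc^2[ m - (lambda0/lambda) * (n(lambda)-1)/(n(lambda0)-1) ].
# A constant index n = 1.46 (no dispersion) is assumed for illustration.
def diffraction_efficiency(wl_nm, wl0_nm, m=1, n=1.46, n0=1.46):
    x = m - (wl0_nm / wl_nm) * (n - 1) / (n0 - 1)
    return np.sinc(x) ** 2        # np.sinc(x) = sin(pi x)/(pi x)

wl = np.arange(400, 701, 10)                  # the 31 channels, in nm
eta = diffraction_efficiency(wl, wl0_nm=550)  # peaks at the design wavelength
```

Evaluating `eta` for several design wavelengths reproduces the qualitative behavior in Fig. 8(c): each pattern is efficient only near its own center wavelength, which motivates the DDO model.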

    The LCoS-SLM, typically used for phase modulation at a single wavelength, is utilized here to modulate the spectrum across the full visible band from 400 to 700 nm. Therefore, we simulated the phase modulation values of the LCoS-SLM over the full spectral band. Figure 8(b) shows the phase modulation results of the LCoS-SLM for the full band at different center wavelengths. In addition, we calculated the diffraction efficiency at different center wavelengths and simulated the diffraction efficiency of the distributed diffractive optics (DDO) model based on the phase modulation patterns loaded at several different center wavelengths, with the specific parameters shown in Fig. 8(c).

    APPENDIX C: DISTRIBUTED DIFFRACTIVE OPTICAL IMAGING

    The distributed diffractive optics model employs spatio-temporal multiplexing to perform distributed imaging of the same scene in different spectral bands by sequentially loading multiple DOEs. The system utilizes the LCoS-SLM to load multiple simulated DOE patterns in batches to realize the DDO model. As shown in Fig. 2(c), three simulated DOE patterns are used for imaging in the 400–500 nm, 500–600 nm, and 600–700 nm bands. These patterns share the same focal length and therefore appear identical at a macroscopic level, but they differ at the pixel level. Because the quantum efficiencies of the R, G, and B channels of the CMOS sensor are primarily distributed across these three bands, each single-channel image captured by the RGB sensor can serve as an approximate substitute for the imaging result of the corresponding DOE in its band. To improve the simulation accuracy of the real PSF, the diffraction efficiency of the DDO model is integrated into the PSF to form the D-PSF [see Eq. (2)]. The D-PSF of a real diffractive imaging system consists of two components: the imaging component (the PSF of the main diffraction order) and the background noise component P_BN of stray light from the other diffraction orders. The main effect of P_BN is to introduce background radiation into the image, reducing contrast.

    DDO uses the LCoS-SLM to dynamically load the grayscale patterns of three simulated DOEs designed for different central wavelengths and images the same field of view with each of them. The images from the different simulated DOEs are then extracted from their corresponding high-diffraction-efficiency bands to synthesize the final captured image. Therefore, the final image acquired on the sensor with the distributed diffraction imaging model consists of three parts:
$$I(x, y; \lambda) = \begin{cases} P_R(x, y; \lambda) * I_R(x, y; \lambda), & \lambda \in [400, 500]\,\text{nm},\\ P_G(x, y; \lambda) * I_G(x, y; \lambda), & \lambda \in (500, 600]\,\text{nm},\\ P_B(x, y; \lambda) * I_B(x, y; \lambda), & \lambda \in (600, 700]\,\text{nm}, \end{cases}$$
where P_R(x, y; λ), P_G(x, y; λ), and P_B(x, y; λ) denote the D-PSFs corresponding to the three simulated DOEs in the R, G, and B bands, respectively, and I_R(x, y; λ), I_G(x, y; λ), and I_B(x, y; λ) denote the images of the R, G, and B channels captured through the three simulated DOEs on the sensor.
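    The band-wise assembly can be sketched as a simple selection over the 31 channels; the three input cubes below are random placeholders for the per-pattern sensor measurements, not real data.

```python
import numpy as np

# Sketch of the DDO band assembly: three simulated DOE patterns image the
# same scene, and each spectral channel of the final cube is taken from the
# pattern whose band covers that wavelength.
def assemble_ddo(cube_low, cube_mid, cube_high, wavelengths):
    """Select each channel from the DOE optimized for its band (nm)."""
    out = np.empty_like(cube_low)
    for i, wl in enumerate(wavelengths):
        if wl <= 500:                # lambda in [400, 500] nm
            out[i] = cube_low[i]
        elif wl <= 600:              # lambda in (500, 600] nm
            out[i] = cube_mid[i]
        else:                        # lambda in (600, 700] nm
            out[i] = cube_high[i]
    return out

wavelengths = np.arange(400, 701, 10)          # 31 channels, 10 nm spacing
rng = np.random.default_rng(0)
cubes = [rng.random((31, 32, 32)) for _ in range(3)]
final = assemble_ddo(*cubes, wavelengths)
```

In the real system the three cubes are measured sequentially by refreshing the SLM, so the assembly costs no extra optics, only time multiplexing.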

    APPENDIX D: SPECTRAL RECONSTRUCTION NETWORK

    RGB images encoded by the hyperspectral cube require an image parser to reconstruct the hyperspectral image. LESHI uses ResU-net as the computational decoder for spectral reconstruction. As shown in Fig. 2(d), ResU-net consists of three parts: the encoder, the decoder, and the residual links. The architecture features six layers of residual convolution blocks for both downsampling and upsampling, with a middle layer connecting the two stages. Each layer uses the exponential linear unit (ELU) as its activation function, and a final convolutional layer with a sigmoid activation normalizes the output 31-channel hyperspectral image. ResU-net is a variant of U-net [47] that adds residual connections and performs multiscale operations on the image, making it suitable for restoring heavily blurred images of scenes.

    APPENDIX E: INVESTIGATION OF DOE PATTERN WITH DIFFERENT LEVELS

    To verify the effect of the number of DOE phase modulation levels on the accuracy of spectral reconstruction, we simulated a set of hyperspectral imaging models with patterns of different levels loaded onto the LCoS-SLM. Figure 9 presents the reconstruction results based on simulated DOEs with 4, 16, 64, and 256 phase modulation levels. The visualization demonstrates that the number of levels is positively related to reconstruction accuracy: as the number of levels increases, the noise of the reconstructed spectral image decreases and the sharpness increases.
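    The trend can be illustrated with a quick quantization experiment: rounding a wrapped phase profile to L levels produces a circular phase error whose mean shrinks roughly as 1/L, which is why 256-level modulation reconstructs better than 4-level. The random profile below is a stand-in, not a trained pattern.

```python
import numpy as np

# Quantize a wrapped phase profile to L discrete levels, as in the
# 4/16/64/256-level comparison.
def quantize_phase(phi, levels):
    step = 2 * np.pi / levels
    return (np.round(phi / step) * step) % (2 * np.pi)

# Circular (wrap-aware) absolute phase error between two profiles.
def circular_error(a, b):
    return np.abs((a - b + np.pi) % (2 * np.pi) - np.pi)

rng = np.random.default_rng(0)
phi = rng.uniform(0, 2 * np.pi, size=100_000)
errs = {L: circular_error(quantize_phase(phi, L), phi).mean()
        for L in (4, 16, 64, 256)}
```

For a uniformly distributed phase, the mean absolute quantization error is π/(2L), so each fourfold increase in levels cuts the phase error fourfold.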

    Figure 9. The effect of different levels of the simulated DOE for spectral reconstruction. Comparing the reconstruction performance for 4, 16, 64, and 256 levels, it can be concluded that the reconstruction performance gradually improves with the growth of levels.

    APPENDIX F: COMPARISON OF FABRICATED DOE, SINGLE DOE PATTERN, AND MULTIPLE DOE PATTERNS IN SNAPSHOT HYPERSPECTRAL IMAGING

    The proposed LESHI was verified by comparing the reconstruction results of three hyperspectral imaging systems: fabricated DOE, LCoS-S, and LCoS-D. The fabricated-DOE system is based on a physically fabricated DOE, the LCoS-S system uses a single phase modulation pattern loaded on the LCoS-SLM, and the LCoS-D system utilizes the DDO model loaded with multiple patterns. The quantization of the height map of the fabricated DOE is limited to at most eight levels, depending on the actual lithography conditions; by contrast, the phase modulation pattern for both LCoS-S and LCoS-D can use up to 256 levels. Figure 10(a) illustrates the spectral image reconstruction results of the three models for four scenes in the test dataset. The LCoS-D model outperforms the other two in terms of noise level and artifacts, indicating superior reconstruction performance. Furthermore, we randomly selected a 4×4-pixel region in each scene and computed the average spectral radiance of the reconstructed images generated by the three models. Figure 10(b) presents the spectral radiance curves for the ground truth and the three models within the same region. The spectral radiance data indicate that the LCoS-D model yields the smallest reconstruction error.

    Figure 10. Comparison of spectral reconstruction simulations for different models. (a) Comparing the four reconstruction data results and visual effects, the diffractive optical imaging model based on LCoS-SLM can effectively improve the reconstruction performance and avoid the degradation of the reconstruction results caused by the quantized DOE. (b) Spectral radiance curves for different models. The spectral curves show that the reconstructed spectral curves of LCoS-D are closer to the ground truth values.

    Figure 11. Performance comparison of hyperspectral reconstruction using fabricated DOE and simulated DOE loaded onto LCoS-SLM. (a) Comparison of PSNR for hyperspectral image reconstruction with different models. (b) Comparison of SSIM metrics for hyperspectral image reconstruction with different models. (c) Comparison of RMSE metrics for hyperspectral image reconstruction with different models. (d) Comparison of ERGAS metrics for hyperspectral image reconstruction with different models.

    We conducted simulations to evaluate the root mean square error (RMSE) and the relative dimensionless global error in synthesis (ERGAS) of the reconstruction. Four models were used: full-precision DOE (DOE-DO), quantization-aware DOE (DOE-QDO), LCoS-S, and LCoS-D, chosen following the comparative methods outlined in Ref. [39]. The RMSE and ERGAS results are shown in Figs. 11(c) and 11(d). The LCoS-D model closely matches the performance of the ideal full-precision DOE model, which highlights the potential of the LCoS-SLM for DOE optimization and substitution.

    APPENDIX G: COMPARISON OF LESHI WITH TYPICAL HYPERSPECTRAL IMAGING MODALITIES

    To assess the performance of the proposed models in LESHI, we conducted simulations and compared them with representative SHI systems, namely, the Fresnel lens [36] and CASSI [14] systems. To ensure a fair comparison, we used the same reconstruction network for all optical coding models as in Appendix D, and all models were trained for 50 epochs with the same optimizer configuration. The reconstruction results for the different coding methods are listed in Table 1. The LCoS-SLM-based spectral imaging system outperforms the other snapshot hyperspectral coding methods, and its advantage is further enhanced by the proposed distributed diffractive optical model and its adaptive mechanism.

    Table 1. PSNR, SSIM, RMSE, and ERGAS Simulation Results of Spectral Image Reconstruction Using Different Models

    | Encoding | PSNR ↑ | SSIM ↑ | RMSE ↓ | ERGAS ↓ |
    | --- | --- | --- | --- | --- |
    | CASSI [14] | 30.65 | 0.897 | 0.0356 | 20.72 |
    | Fresnel [36] | 27.42 | 0.868 | 0.0557 | 30.16 |
    | DOE [39] | 31.69 | 0.935 | 0.0322 | 19.93 |
    | LCoS-S | 33.90 | 0.960 | 0.0285 | 14.88 |
    | LCoS-D | 35.42 | 0.9768 | 0.0209 | 12.85 |

    [18] Y. Peng, Q. Fu, W. Heidrich. The diffractive achromat: full spectrum computational imaging with diffractive optics. SIGGRAPH ASIA 2016 Virtual Reality meets Physical Reality: Modelling and Simulating Virtual Humans and Environments, 1-2(2016).

    [26] J. W. Goodman. Introduction to Fourier Optics(2005).

    [27] S. H. Baek, H. Ikoma, M. H. Kim. Single-shot hyperspectral-depth imaging with learned diffractive optics. IEEE/CVF International Conference on Computer Vision, 2651-2660(2021).

    [30] C. A. Metzler, H. Ikoma, G. Wetzstein. Deep optics for single-shot high-dynamic-range imaging. IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1375-1385(2020).

    [39] L. Li, L. Wang, W. Song. Quantization-aware deep optics for diffractive snapshot hyperspectral imaging. IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19780-19789(2022).

    [47] O. Ronneberger, P. Fischer, T. Brox. U-Net: convolutional networks for biomedical image segmentation. 18th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 234-241(2015).

    [57] D. Wang, N. N. Li, Y. L. Li. Large viewing angle holographic 3D display system based on maximum diffraction modulation. Light Adv. Manuf., 4, 195-205(2023).

    [58] B. Arad, O. Ben-Shahar. Sparse recovery of hyperspectral signal from natural RGB images. 14th European Conference on Computer Vision (ECCV), 19-34(2016).

    [60] K. Zhang, L. V. Gool, R. Timofte. Deep unfolding network for image super-resolution. IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3217-3226(2020).


    Chong Zhang, Xianglei Liu, Lizhi Wang, Shining Ma, Yuanjin Zheng, Yue Liu, Hua Huang, Yongtian Wang, Weitao Song, "Lensless efficient snapshot hyperspectral imaging using dynamic phase modulation," Photonics Res. 13, 511 (2025)

    Paper Information

    Category: Imaging Systems, Microscopy, and Displays

    Received: Oct. 7, 2024

    Accepted: Nov. 21, 2024

    Published Online: Feb. 10, 2025

    The Author Email: Weitao Song (swt@bit.edu.cn)

    DOI:10.1364/PRJ.543621

    CSTR:32188.14.PRJ.543621
