Advanced Photonics, Volume 7, Issue 2, 026004 (2025)
Single-shot spatial-temporal-spectral complex amplitude imaging via wavelength-time multiplexing
Single-shot ultrafast multidimensional optical imaging (UMOI) combines ultrahigh temporal resolution with multidimensional imaging capabilities in a snapshot, making it an essential tool for real-time detection and analysis of ultrafast scenes. However, current single-shot UMOI techniques cannot simultaneously capture spatial-temporal-spectral complex amplitude information, preventing a complete analysis of ultrafast scenes. To address this issue, we propose a single-shot spatial-temporal-spectral complex amplitude imaging (STS-CAI) technique using wavelength and time multiplexing. By combining precise modulation of a broadband pulse via an encoding plate in coherent diffraction imaging with spatial-temporal shearing through a wide-open-slit streak camera, dual-mode multiplexed image reconstruction in wavelength and time is achieved, which significantly enhances the efficiency of information acquisition. Experimentally, a custom-built STS-CAI apparatus precisely measures the spatiotemporal characteristics of a picosecond spatiotemporally chirped pulse and a spatial vortex pulse. STS-CAI demonstrates both ultrahigh temporal resolution and robust phase sensitivity. Prospectively, this technique is valuable for spatiotemporal coupling measurements of large-aperture ultrashort pulses and offers promising applications in both fundamental research and applied sciences.
1 Introduction
Ultrafast multidimensional optical imaging (UMOI)1 combines ultrahigh temporal resolution with multidimensional information acquisition, making it an essential tool for the real-time detection and analysis of ultrafast scenes. In particular, single-shot UMOI (ss-UMOI) captures this multidimensional information within a single exposure, without requiring the dynamic scene to be repeatable.
Ss-UMOI can be categorized into two types: framing photography12 and computational imaging.13 Framing photography rapidly captures dynamic scenes at multiple time instants using ultrafast gating or time mapping. Ultrafast gating is achieved using sequential gating techniques, such as nonlinear optical gating14 and electronic gating.15 Time mapping, in contrast, transfers temporal information to other dimensions, such as space,16 spatial frequency,17 angle,18 wavelength,19 or polarization,20,21 and maps it to different sensor positions. This approach allows the capture of multiple high-resolution images within an extremely brief time slice, making it ideal for the direct observation of dynamic processes. However, the sequence depths (i.e., the number of frames per exposure) captured by these techniques are typically limited. Computation-based ss-UMOI integrates image reconstruction methods into the acquisition process to capture and reconstruct multidimensional information of dynamic scenes. Representative implementations of this approach include compressed ultrafast photography (CUP),22,23 compressed ultrafast spectral-temporal photography,24,25 and swept-coded aperture femtophotography.26 The core concept is to leverage computational algorithms to overcome the limitations of traditional hardware, enabling the single-shot observation of ultrafast phenomena on extremely short time scales, followed by the reconstruction of multidimensional data across space, time, spectrum, and even more dimensions. CUP is a notable example of extending the one-dimensional (1D) imaging capability of traditional streak cameras to a two-dimensional (2D) space using compressed sensing.27 This technique achieves an ultrafast imaging speed of 10 trillion frames per second and an extremely high sequence depth of over 300 frames. Several improved CUP-based methods have been developed to further expand the imaging dimensions. For instance, hyperspectral CUP6 captures four-dimensional spatial-temporal-spectral information, whereas stereo-polarimetric CUP28 can detect five-dimensional spatial-temporal-polarization information. Other advancements include spectral-volumetric CUP29 and polarimetric-spectral-sensitive CUP.30 These methods have significantly expanded the dimensionality of imaging, allowing the capture of complex and high-speed events in exceptional detail.
Despite these advancements, phase measurement using intensity-sensitive UMOI alone remains a challenge. Recently, several ss-UMOI techniques that can acquire the complex amplitude information of ultrafast light fields have emerged to provide phase measurement capability. These ultrafast complex amplitude imaging techniques enable simultaneous multiframe perception of intensity and phase information by combining computational phase imaging techniques with ultrafast imaging methods. For instance, single-shot ultrafast holographic microscopy31 uses the multiplexing of time and spatial frequencies to capture high-speed dynamic scenes with an ultrahigh temporal resolution of 200 fs and a sequence depth of 14 in a single exposure. However, it requires multiple reference pulses generated by spatial light modulators and custom echelons, limiting its general applicability. Compressed optical field topography32 integrates coded aperture snapshot spectral imaging with global three-dimensional (3D) phase retrieval to fully characterize a laser pulse's spatiotemporal coupling in one shot. Nonetheless, it relies on two iterative algorithms and noncollinear frequency-resolved optical gating data, which can introduce significant reconstruction errors, and it is limited to broadband laser pulses. Lensless single-shot ultrafast optical imaging33 uses an acousto-optic programmable dispersive filter for wavelength filtering of chirped pulses combined with digital inline holography, enabling ultrafast imaging on both picosecond and nanosecond timescales. This method, however, requires a complex delay optical path and has a limited sequence depth. The single-shot ultrafast phase retrieval imaging technique34 employs multiangle illumination with a coherent diffraction imaging (CDI) algorithm using a multiplexed time-delay chip to generate delayed pulses, but the fixed time delays reduce flexibility. Multiwavelength multiplexed phase imaging can achieve spatial multiplexing and effectively increase the sequence depth,35,36 but it still suffers from insufficient information convergence in single-shot phase imaging. Introducing a strong modulation plate into coherent modulation imaging (CMI) can effectively improve the convergence of CDI when the number of diffraction patterns is insufficient.37 Simultaneously, multiwavelength CMI can be used to achieve spatially multiplexed phase imaging.38 A single-shot ultrafast multiplexed CDI technique39 applies a multiplexing strategy to achieve high-resolution ultrafast intensity and phase imaging in a single shot. However, multiangle burst illumination necessitates a combination of multiple optical fibers, which, in turn, limits the sequence depth to 4.
Although ultrafast imaging technology is advancing rapidly, a technical solution for simultaneously recording the intensity and phase evolution of an ultrafast light field across the spatial, spectral, and temporal dimensions remains elusive. To overcome these limitations, we have developed a single-shot spatial-temporal-spectral complex amplitude imaging (STS-CAI) technique using wavelength and time multiplexing. STS-CAI utilizes the time-deflection characteristics of a streak camera and dual-modal (i.e., wavelength and time) multiplexed CDI to achieve single-shot, spatiotemporally and spectrally resolved ultrafast imaging in both intensity and phase. In image reconstruction, a compression-multiplexed phase-retrieval algorithm was developed to reconstruct the spatial amplitude and phase information of the light field at different time instants and wavelengths. The effectiveness of this physical model and algorithm in accurately reconstructing complex dynamic scenes is demonstrated theoretically through a numerical simulation. In the experiment, the STS evolutions of a picosecond spatiotemporally chirped pulse and a spatial vortex pulse, each with the same 40-nm bandwidth, were measured. In addition, the ultrahigh temporal and spectral resolutions allow the temporal and spectral phases to be retrieved via the Fourier-transform relation between time and spectral frequency, enabling the measurement of spatiotemporally coupled wavefronts. STS-CAI provides a more comprehensive view of dynamic scenes, and its capability of achieving a high temporal resolution without requiring repeatability or scanning makes it well suited for studying ultrafast processes. This richer information enhances our ability to visualize and track ultrafast phenomena and facilitates a deeper analysis of dynamic processes.
2 Material and Methods
As shown in Fig. 1, the framework of the whole system includes two parts: image acquisition and image reconstruction. The schematic of the forward image acquisition of STS-CAI is shown in Fig. 1(a), in which the iteration planes include the constraint plane, the object plane, the encoding plate plane, and the streak camera plane, separated by fixed propagation distances. First, the ultrafast scene is represented by a broadband ultrafast light field with a finite pulse duration after passing through the object. After propagating from the object plane to the encoding plate, the light field experiences the phase-encoding modulation and then propagates a further distance to the streak camera. Finally, a time-deflected and superimposed diffraction pattern is recorded after deflection and integration. This method uses a wide-open-slit streak camera to deflect the ultrafast light field at different time instants to different positions in space. The variation of the refractive index of the phase-encoding plate with wavelength, combined with the spatial-temporal shearing capability of the streak camera, enables dual-modal multiplexing of wavelength and time. As shown in the diffraction pattern of Fig. 1(a), the pattern at each time instant after deflection contains multiple spectral components, indexed by the time and wavelength sequence numbers. Furthermore, the deflection angle can serve as prior information to enhance the reconstruction, thereby accelerating the convergence of the complex amplitude. In image reconstruction, based on a single diffraction pattern and a self-developed compression-multiplexed phase-retrieval algorithm, the time- and spectrum-resolved complex amplitude information of the ultrafast light field is reconstructed.
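A minimal sketch of this forward model is given below, using notation introduced here purely for illustration (E_0, T_m, d_1, d_2, and S_n are assumed symbols, not necessarily those of the original formulation):

```latex
% Hedged sketch of the STS-CAI forward model; E_0, T_m, d_1, d_2, and S_n are
% notation assumed for illustration.
I(x, y) \;=\; \sum_{n}\sum_{m}
  \mathcal{S}_{n}\!\left\{\,\Bigl|\,
  \mathcal{P}_{d_2}^{\lambda_m}\!\bigl[\,T_m(x, y)\,
  \mathcal{P}_{d_1}^{\lambda_m}\!\bigl[E_0(x, y;\, t_n, \lambda_m)\bigr]\bigr]
  \Bigr|^{2}\right\}
```

Here, E_0 is the object-plane field at time instant t_n and wavelength λ_m, T_m is the wavelength-dependent complex transmittance of the encoding plate, P_d^λ denotes free-space propagation over a distance d, and S_n is the vertical shear imposed by the streak camera on the n-th time slice before the detector integrates all components into a single frame.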
Figure 1. Schematic diagram of STS-CAI. (a) Schematic diagram of the forward image acquisition. (b) Flow chart of the compression-multiplexed phase-retrieval algorithm for image reconstruction.
A flow chart of the compression-multiplexed phase-retrieval algorithm for image reconstruction of STS-CAI is shown in Fig. 1(b). In the iterative process, two nested loops over time and wavelength are used to perform iterative calculations for each wavelength at a given time point until the iteration reaches the central-wavelength position where the energy is concentrated. An initial estimate of the light field is assigned on the object plane for each time instant and wavelength, and the field that has propagated from the object plane to the encoding plate in free space is obtained with the angular-spectrum propagation operator.
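For reference, the angular-spectrum form of this free-space propagation operator is the standard relation (the equivalent Fresnel form would serve equally for this sketch):

```latex
% Angular-spectrum propagation of a monochromatic field over a distance d.
\mathcal{P}_{d}^{\lambda}\{E\}(x, y) \;=\;
\mathcal{F}^{-1}\!\left\{ \hat{E}(f_x, f_y)\,
\exp\!\left[\, i\,\frac{2\pi d}{\lambda}\,
\sqrt{1 - (\lambda f_x)^2 - (\lambda f_y)^2}\,\right]\right\},
\qquad \hat{E} = \mathcal{F}\{E\}
```

where F is the 2D Fourier transform and (f_x, f_y) are the spatial frequencies; the same operator with -d performs the back-propagation used in the update steps below.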
At the encoding plate, the propagated field is multiplied by the complex amplitude transmittance of the plate for the corresponding wavelength, which can be reconstructed in advance using an extended ptychographic iterative engine.40 By scanning the encoding plate at multiple positions, its complex amplitude can be reconstructed with high precision. The modulated field is then propagated in free space to the streak camera plane, yielding the estimated time-deflected diffraction pattern.
On the streak camera plane, the estimate is updated by constraining it with the measured intensity; this update process is indicated in Fig. 1(b). The updated light field is propagated back to the phase-encoding plate plane, where the plate modulation is removed to update the pulse wavefront in front of the encoding plate.
The next iteration is then performed until the convergence error falls below a threshold value. It should be noted that the spectral range of the light field is best determined by simultaneously measuring a split-off portion of the beam with a spectrometer, which improves the accuracy and convergence speed in wavelength. Through the above iterative process, the spatial-spectral information of the ultrafast light field at different time instants can be obtained. At each specific moment, the diffraction and retrieval of the light beams at different wavelengths occur coaxially, and the ability to distinguish the light-field information corresponding to these wavelengths depends primarily on the modulation characteristics of the encoding plate, as well as on the diffraction characteristics of the light field.
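To make the reconstruction loop concrete, the following is a minimal sketch of one possible compression-multiplexed phase-retrieval iteration. It assumes an angular-spectrum propagator, models the streak-camera shear as a pixel shift, and updates all time/wavelength components jointly against the single measured pattern; the function names, update rule, and these simplifications are illustrative rather than a faithful reproduction of the implementation described above.

```python
import numpy as np

def angular_spectrum(field, wavelength, distance, pixel_pitch):
    """Propagate a complex 2D field over `distance` with the angular-spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))  # evanescent part dropped
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * distance))

def sts_cai_reconstruct(I_meas, plates, shear_px, wavelengths, d1, d2, pitch, n_iter=200):
    """Jointly update all (time, wavelength) components against one measured pattern."""
    est = {(n, lam): np.ones_like(I_meas, dtype=complex)
           for n in shear_px for lam in wavelengths}
    for _ in range(n_iter):
        fields, model = {}, np.zeros_like(I_meas, dtype=float)
        for (n, lam), obj in est.items():
            # Forward model: object -> encoding plate -> streak camera, then temporal shear
            f = angular_spectrum(angular_spectrum(obj, lam, d1, pitch) * plates[lam],
                                 lam, d2, pitch)
            f = np.roll(f, shear_px[n], axis=0)
            fields[(n, lam)] = f
            model += np.abs(f) ** 2
        # Shared amplitude constraint: rescale every component by the measured intensity
        scale = np.sqrt(I_meas / (model + 1e-12))
        for (n, lam), f in fields.items():
            f_new = np.roll(f * scale, -shear_px[n], axis=0)       # undo the shear
            g = angular_spectrum(f_new, lam, -d2, pitch) / (plates[lam] + 1e-12)
            est[(n, lam)] = angular_spectrum(g, lam, -d1, pitch)   # back to the object plane
    return est
```

In the nested scheme described above, the wavelength loop at each time instant proceeds toward the central wavelength where the energy is concentrated; the joint update in this sketch is only a compact stand-in for that structure.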
A numerical simulation was performed based on the imaging model of STS-CAI. In the simulation, a temporally chirped ultrashort pulse carrying rotation dynamics in both intensity and phase, as well as intensity variations with wavelength, was generated. The central wavelength was 790 nm, and the light field at each moment corresponded to a spectral full width at half-maximum (FWHM) of 18 nm. There are eight time points in the simulation, and the wavelength covers 769 to 808 nm, with an interval of 3 nm between adjacent wavelengths. The object rotates by 20 deg between adjacent time instants, and at a given moment, the energy ratios of the seven wavelengths are [0.4, 0.6, 0.8, 1, 0.8, 0.6, 0.4]. Using the compression-multiplexed phase-retrieval algorithm, the object's complex amplitude was reconstructed at the eight moments and seven wavelengths. The reconstructed object amplitudes are shown in Fig. 2(a), including the amplitudes corresponding to different wavelengths at each time instant. Here, the amplitude images can be reconstructed in a single shot based on the compression-multiplexed phase-retrieval algorithm, and the right side of Fig. 2(a) shows the ground truth at each moment. Figures 2(b) and 2(c) show the object phase ground truth and the retrieval result at the central wavelength at a given time, respectively. The comparison shows that this method can accurately retrieve the spatial phase information. Figure 2(d) shows 1D curves drawn along the short red lines in Figs. 2(b) and 2(c); the curves show close agreement between the ground truth and the STS-CAI retrieval results. In addition, as shown in Fig. 2(e), the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) of the central-wavelength reconstruction at each moment in Fig. 2(a) were calculated against the ground truth. The PSNR at the first moment is lower than those at the following seven moments, which oscillate between 25.8 and 26.8 dB. The SSIM of the reconstructed results is stable within the range of 0.85 to 0.9. These results demonstrate the fidelity of the reconstruction in both accuracy and reliability, validating the imaging model and reinforcing the applicability and credibility of the reconstruction algorithm.
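The PSNR and SSIM values quoted above can be computed with standard image-quality metrics; a brief sketch using scikit-image (the array names are hypothetical) is:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def fidelity_metrics(ground_truth, reconstruction):
    """PSNR (dB) and SSIM between a ground-truth frame and its reconstruction."""
    gt = ground_truth / ground_truth.max()
    rec = reconstruction / reconstruction.max()
    return (peak_signal_noise_ratio(gt, rec, data_range=1.0),
            structural_similarity(gt, rec, data_range=1.0))

# Example: evaluate the central-wavelength frame at each of the eight time instants,
# given lists of 2D arrays frames_gt and frames_rec from the simulation.
# for t, (gt, rec) in enumerate(zip(frames_gt, frames_rec)):
#     print(t, fidelity_metrics(gt, rec))
```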
Figure 2. STS-CAI simulation results. (a) Reconstructed amplitude information at different wavelengths and different times and the ground truth of the object at different times. (b) The phase ground truth of the object at the central wavelength at a given time; (c) the corresponding retrieved phase; (d) 1D phase curves along the red lines in (b) and (c); (e) PSNR and SSIM at each moment.
3 Experimental Results
3.1 Experimental Optical Path Design and Time-Spectrum Measurement
STS-CAI utilizes a randomly distributed phase-encoding plate and a streak camera with a fully opened entrance slit for data acquisition, together with a compression-multiplexed phase-retrieval algorithm for image reconstruction, to achieve a single-shot measurement of an ultrashort pulse. The experimental optical path is shown in Fig. 3(a). A femtosecond laser pulse was broadened to 200 ps after passing through the pulse-stretching device and then passed through a half-wave plate and linear polarizer to adjust the light-field energy. The ultrashort pulse then passed through a beam-expander lens and was modulated by the object to form the measured ultrafast scene. After propagating to the phase-encoding plate, the scene is modulated by the plate, and the field at each wavelength is modulated independently. The inset shows the random phase distribution of the encoding plate, which has a binary design distribution at a wavelength of 632.8 nm. Because the streak camera degrades the diffraction quality of the recorded pattern, the use of encoding plates with smaller unit sizes is constrained. The phase difference between the lowest and highest points at a wavelength of 790 nm was 2.5 rad. For other wavelengths, the phase distribution can be rescaled from the design-wavelength distribution according to the wavelength and the refractive-index dispersion of the plate material, as sketched below. The phase distributions of the encoding plate corresponding to three wavelengths are further shown in Note I in the Supplementary Material. The modulated diffraction pattern, after propagating a further distance, is time-deflected by the streak camera, and the different temporal information is distributed at different positions of an internal planar detector. In image reconstruction, the compression-multiplexed phase-retrieval algorithm was applied to reconstruct the spatial amplitude and phase information of the light field at different time instants and wavelengths. Finally, based on the free-space propagation theory of the light field, the complex amplitude distribution of the light field at the object surface was obtained.
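For a plate with a fixed etch depth h, a standard rescaling relation (a sketch under this assumption, not necessarily the exact expression used here) is

```latex
% Wavelength scaling of a fixed-height phase step (assumed relation).
\varphi(\lambda) \;=\; \frac{2\pi\,[\,n(\lambda)-1\,]\,h}{\lambda}
\;=\; \varphi(\lambda_0)\,\frac{\lambda_0}{\lambda}\,
\frac{n(\lambda)-1}{n(\lambda_0)-1}
```

where λ₀ = 632.8 nm is the design wavelength and n(λ) is the refractive index of the plate material. For a π design step and negligible dispersion, this gives π × 632.8/790 ≈ 2.5 rad at 790 nm, consistent with the value quoted above.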
Figure 3. Experimental optical path and time-spectrum curve. (a) Experimental optical path and phase distribution diagram of the encoding plate. (b) Temporal-spectral distribution recorded by the streak camera in 1D mode. (c) 1D spectral intensity curve and time-spectrum curve.
In the experiment, by narrowing the entrance slit width of the streak camera down to tens of micrometers, a 1D image of the ultrashort pulse was obtained with a spectrograph (SP2300, Princeton Instruments) for validation, as shown in Fig. 3(b). This measurement verifies the effectiveness of the chirped-pulse generation module. From the recorded pattern, the time duration and spectral width of the pulse can be calculated; the corresponding spectral intensity curve and time-spectrum curve are plotted in Fig. 3(c). The pulse duration was 200 ps, and the spectral width was 44 nm. When the streak camera slit was widened, a large-aperture 2D time-deflected diffraction pattern was obtained. Based on this single diffraction pattern and the compression-multiplexed phase-retrieval algorithm, the complex amplitude distribution of the light field at different time instants and the corresponding spectral information were reconstructed.
The experimental setup is configured with the following parameters. The phase-encoding plate features a binary step-phase modulation designed for 632.8 nm. The laser system consists of a Ti:sapphire femtosecond laser amplifier (ThunderWave-1k, Physcience Opto-electronics) that outputs femtosecond pulses at a central wavelength of 790 nm; these pulses are temporally chirped to 200 ps using a pulse stretcher. The streak camera (C7700, Hamamatsu Photonics K.K.) serves as the time-deflected imaging device. It converts incident light into electrons in proportion to the light intensity, deflects them vertically according to the scanning voltages on the sweeping electrodes, and reconverts the deflected electrons into light. This light is subsequently integrated by a complementary metal oxide semiconductor (CMOS) camera (ORCA-Flash 4.0, Hamamatsu Photonics K.K.). A digital delay generator (DG645, Stanford Research Systems) precisely synchronizes the entire system, ensuring that the ultrafast optical signal falls within the streak camera's time window. Based on the aperture of the light beam to be measured, the unit size of the encoding plate, and the extent of the diffraction pattern, the diffraction distances are chosen so that the high-frequency information is captured on the detector. For the spatiotemporally chirped pulse measurements, a near-infrared transmission grating (GTI25-03A, Thorlabs, Inc.), used at an incident angle of 31.7 deg, is utilized. For the spatial vortex pulse measurement, a vortex phase plate (VR2-795, Lbtek Co., Ltd.) designed for a wavelength of 795 nm with a topological charge of 2 is employed.
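As a rough geometric guide (an assumption introduced here for illustration, not the design rule used above), a plate feature of size a diffracts light into an angle of roughly λ/a, so keeping that diffracted light on a detector of half-width W for a beam of diameter D suggests

```latex
% Approximate geometric constraint on the plate-to-camera distance d_2 (assumed).
\frac{D}{2} + \frac{\lambda\, d_2}{a} \;\lesssim\; W
\quad\Longrightarrow\quad
d_2 \;\lesssim\; \frac{a}{\lambda}\left(W - \frac{D}{2}\right)
```

which is consistent with the later remark that a larger beam diameter forces a shorter diffraction distance.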
3.2 Spatiotemporally Chirped Pulse Measurement with STS-CAI
The precise spatiotemporal distribution of the wavelength within high-energy ultrashort pulses is crucial for maximizing the peak power after focusing. Consequently, it is imperative to achieve high-precision measurements of spatiotemporally chirped pulses, which allows the indirect determination of their energy distributions at the focus. The spatiotemporal distribution of a spatiotemporally dispersed ultrashort pulse was experimentally measured using the STS-CAI system. Through the multiplexed phase-retrieval algorithm, the overlapping diffraction patterns can be decomposed into the complex amplitude information of the light field across various wavelengths. Figure 4(a) shows the experimental setup, in which the broadband picosecond pulse passes through a transmission grating at an incident angle of 31.7 deg to generate spatial chirp and is then modulated by the phase-encoding plate. Finally, the intensity of the time-deflected diffraction pattern is recorded using the streak camera. As shown in Fig. 4(b), the different spectral components of the pulse are dispersed along one spatial direction, and the different time components are sheared along the orthogonal (streak-deflection) direction. The spatially resolved complex amplitude distribution of the spatiotemporally chirped pulse at different time instants and wavelengths was retrieved using this diffraction pattern and the compression-multiplexed phase-retrieval algorithm.
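For reference, the grating converts wavelength into propagation angle, and hence into transverse position after propagation, through the standard grating relations (the groove spacing Λ of the specific grating is not reproduced here):

```latex
% Transmission-grating diffraction and angular dispersion (standard relations).
\Lambda\left(\sin\theta_{i} + \sin\theta_{m}\right) = m\,\lambda,
\qquad
\frac{d\theta_{m}}{d\lambda} = \frac{m}{\Lambda\cos\theta_{m}}
```

so, after a propagation distance L, a wavelength λ is displaced transversely by roughly L (dθ_m/dλ)(λ − λ_c) relative to the central wavelength λ_c, which is the spatial chirp exploited in Fig. 4.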
Figure 4. Spatiotemporally chirped pulse measurement with STS-CAI. (a) The optical path of the measurement with STS-CAI. (b) The diffraction pattern recorded by the streak camera. (c) Amplitude distribution of the pulse reconstructed by STS-CAI. (d) Phase distribution of the pulse reconstructed by STS-CAI. (e) Temporal intensity and phase curves of the pulse at points A and B. (f) Spectral intensity and phase curves of the pulse at points A and B. The unit of the color bar in panel (d) is radians.
Figures 4(c) and 4(d) show the amplitude and phase images of the pulse at the corresponding times (up to 100 ps) and wavelengths (766 to 810 nm), respectively. The energy distribution of the spatiotemporally chirped pulse changes in both transverse directions with variations in wavelength and time. For example, in Fig. 4(c), the direction in which the energy density moves differs between the earliest frames, the 20 to 40 ps range, and the 60 to 100 ps range. Therefore, this method can track the change in energy density and retrieve the accompanying change in the phase of the light field, as shown in Fig. 4(d). In Fig. 4(d), the positions of the abrupt-phase regions, labeled Q1–Q3, shift at different times, and their sizes change significantly. Based on the retrieved phase and amplitude combined with the free-space propagation equation, the light-field distribution at the focal position can be obtained; the focal-position analysis is discussed in Sec. 3.4. Owing to the change in energy density, the changes at different pulse positions can be obtained from Fig. 4(c). As shown in Figs. 4(e) and 4(f), the pulse durations at points A and B were 141 and 142 ps, respectively, and the spectral widths at these two points were 36 and 37 nm, respectively. In addition, there is a one-to-one correspondence between time and spectrum for the chirped pulse; therefore, the temporal and spectral phase distributions can be related through this time-frequency mapping,41,42 as sketched below.
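A minimal sketch of this relation, with notation and sign convention assumed for illustration: writing the field at a given spatial point as E(t) = A(t) exp{i[ω₀t + φ(t)]}, with spectrum |Ẽ(ω)| exp[iΦ(ω)], the strong chirp makes the stationary-phase approximation valid, so

```latex
% Time-frequency mapping of a strongly chirped pulse (stationary-phase approximation;
% the sign of the group delay depends on the Fourier-transform convention).
\omega(t) \;=\; \omega_0 + \frac{d\varphi(t)}{dt},
\qquad
\frac{d\Phi(\omega)}{d\omega} \;\approx\; -\,t(\omega)
```

where t(ω) is the inverse of the monotonic mapping ω(t); the temporal phase therefore determines the spectral phase up to an additive constant, and vice versa.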
3.3 Spatial Vortex Pulse Measurement with STS-CAI
STS-CAI has also been utilized to detect an ultrashort spatial vortex pulse using a simple diffraction optical path. The experimental system is shown in Fig. 5(a), in which the temporally chirped pulse is phase modulated by a vortex phase plate with a topological charge of 2 and becomes an ultrashort spatial vortex pulse. It is then modulated by the phase-encoding plate and forms a diffraction pattern after propagation. Finally, the time-varying diffraction pattern is temporally deflected and integrated in space by the streak camera for spatiotemporal reconstruction. As shown in Fig. 5(b), the temporal information is deflected along the streak-deflection direction, and the information at different time instants can be reconstructed using the compression-multiplexed phase-retrieval algorithm. In addition, the algorithm can reconstruct the spectral information corresponding to different time instants.
Figure 5. Spatial vortex pulse measurement with STS-CAI. (a) The optical path of the measurement with STS-CAI. (b) The diffraction pattern recorded by the streak camera. (c) Amplitude distribution of the pulse reconstructed by STS-CAI. (d) Phase distribution of the pulse reconstructed by STS-CAI. (e) Temporal intensity and phase curves at points A and B. (f) Spectral intensity and phase curves at points A and B. The unit of the color bar in panel (d) is radians.
As shown in Figs. 5(c) and 5(d), the amplitude and phase distributions of the pulse were retrieved. As shown in Fig. 5(c), the vortex rotates over time: it rotates in one direction during the earlier part of the time window (up to 20 ps) but in the opposite direction in the range of 40 to 80 ps. Therefore, the pulse carries a quadratic nonlinear chirp, which is analyzed through the spatiotemporal electric-field distribution in Sec. 3.4. Figure 5(d) shows the phase reconstruction results, which confirm that the topological charge is 2 and that the phase rotation matches the amplitude rotation, reversing direction between the earlier frames and the 40 to 80 ps range. The details of the phase rotation angle are given in Note II in the Supplementary Material. From the reconstruction results, the pulse durations at different points and the corresponding spectral information can be analyzed, as shown in Figs. 5(e) and 5(f). Figure 5(e) shows the pulse-duration distributions at points A and B, and Fig. 5(f) shows the corresponding spectral distributions. The time-domain pulse widths at points A and B were 184 and 163 ps, respectively, whereas the spectral widths at points A and B were 42 and 45 nm, respectively. There are clear differences between the intensity curves at points A and B in Fig. 5(e): the intensity at the center of point A is concave, whereas that at point B is convex, and the centers of the pulse envelopes at points A and B are offset in time. The corresponding spectral centers shift in opposite directions, and the spectral intensity curves show differences consistent with their temporal counterparts. Consequently, the coefficients of the quadratic chirp terms at the two points differ, which is crucial information for further improving the spatiotemporal beam quality.43 Based on the time-frequency mapping relation and the GS algorithm applied in the time domain, the temporal and spectral phase distributions can be obtained; Figs. 5(e) and 5(f) show these phase distributions at points A and B. Owing to the different chirp amounts and spectral phase slopes, there is also a significant difference in the instantaneous frequencies at points A and B. The above experimental results prove that STS-CAI can realize ultrafast measurement of ultrashort spatial vortex pulses and can be used to detect the energy density distribution of the pulse at different time instants. The complete amplitude and phase evolutions are shown in Video 1 and Video 2, respectively. The temporal resolution of the video was 4 ps, which corresponds to a spectral FWHM resolution of 0.8 nm.
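A minimal sketch of such a time-domain GS iteration, assuming the measured temporal and spectral intensity profiles at one spatial point are given on Fourier-conjugate grids (the array names and the random initialization are illustrative), is

```python
import numpy as np

def time_domain_gs(intensity_t, intensity_w, n_iter=500, seed=0):
    """Retrieve temporal and spectral phases from the two measured intensity profiles."""
    rng = np.random.default_rng(seed)
    amp_t = np.sqrt(np.asarray(intensity_t, dtype=float))
    amp_w = np.sqrt(np.asarray(intensity_w, dtype=float))   # in FFT (unshifted) order
    field_t = amp_t * np.exp(1j * rng.uniform(0, 2 * np.pi, amp_t.size))
    for _ in range(n_iter):
        spec = np.fft.fft(field_t)
        spec = amp_w * np.exp(1j * np.angle(spec))           # impose spectral magnitude
        field_t = np.fft.ifft(spec)
        field_t = amp_t * np.exp(1j * np.angle(field_t))     # impose temporal magnitude
    return np.angle(field_t), np.angle(np.fft.fft(field_t))
```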
3.4 Analysis of the Spatiotemporal Distributions of Ultrashort Pulses
Utilizing STS-CAI, the STS distribution of a chirped pulse can be accurately determined. Furthermore, the STS distribution at the focus can be reconstructed through the analysis of complex-amplitude light-field propagation. Ensuring uniformity of the STS distribution at focus is crucial for enhancing the peak power of ultrashort pulses. For comparison, the 3D spatiotemporal distributions of the spatiotemporally chirped and spatial vortex pulses reconstructed with STS-CAI are shown in Figs. 6(a)–6(d) and Figs. 6(e)–6(h), respectively. Figure 6(a) shows the normalized intensity contour of the chirped picosecond pulse dispersed by the transmission grating, as measured with our system. The normalized intensities of the orange, yellow, and green contours are 0.3, 0.5, and 0.8, respectively. Furthermore, based on the 3D outline, the projections onto the three coordinate planes can be obtained directly. The FWHM of the intensity along the green dotted line is 1.95 mm. Figures 6(b) and 6(f) display the 3D STS distributions of the light fields in Figs. 6(a) and 6(e) at the focal position. These results were obtained through spatial propagation using the angular-spectrum method combined with the spatiotemporal-coupling calculation based on the time-frequency mapping relation. The FWHM of the intensity at the green dotted line in Fig. 6(b) is 0.06 mm. From the temporal amplitude and phase, together with the central frequency, the electric field of the chirped pulse can be calculated, as sketched below. Figures 6(c) and 6(d) show the electric field, amplitude, and phase curves at the two marked points in Fig. 6(a). Figure 6(c) shows that the second derivative of the temporal phase is positive and that the frequency of the electric field at the leading edge is lower than that at the trailing edge, corresponding to long-wavelength components in the front of the pulse and short-wavelength components in the tail. Figure 6(d) shows the temporal intensity and phase curves at the second point: there is a depression in the middle of the temporal intensity distribution, the second derivative of the phase is the same as at the first point, whereas the first-order phase term differs, and the peak intensities in time and wavelength are shifted, corresponding to Figs. 4(e) and 4(f).
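In the notation assumed earlier for this sketch, the real field at a given spatial point and its instantaneous frequency are

```latex
% Reconstruction of the real field from the retrieved temporal amplitude and phase
% (notation assumed for illustration).
E(t) \;=\; A(t)\,\cos\!\bigl[\omega_0 t + \varphi(t)\bigr],
\qquad
\omega(t) \;=\; \omega_0 + \frac{d\varphi(t)}{dt}
```

so a positive second derivative of φ(t) means the instantaneous frequency rises from the leading edge to the trailing edge, matching the red-front, blue-tail behavior described for Fig. 6(c).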
Figure 6. STS reconstruction results based on STS-CAI. (a) Measured 3D spatial and temporal distribution of the spatiotemporally chirped pulse at the phase-encoding plate. (b) Calculated 3D spatial and temporal distribution of the spatiotemporally chirped pulse at focus. (c), (d) The electric field, amplitude, and phase curves at the two marked points in panel (a). (e) Measured 3D spatial and temporal distribution of the spatial vortex pulse at the phase-encoding plate. (f) Calculated 3D spatial and temporal distribution of the spatial vortex pulse at focus. (g), (h) The electric field, amplitude, and phase curves at the two marked points in panel (e).
The corresponding 3D spatiotemporal distributions can also be obtained for the spatial vortex pulse. Figure 6(e) shows the measured normalized intensity contour, for which the FWHM of the intensity at the green dotted line is 1.43 mm. Figure 6(f) shows the calculated spatiotemporal distribution of the light field at the focus; this result was likewise obtained by propagating the complex-amplitude light field. The FWHM of the intensity at the green dotted line in Fig. 6(f) is 0.1 mm. The STS-CAI method can also be used to obtain the electric-field, amplitude, and phase curves at the two marked points, as shown in Figs. 6(g) and 6(h), respectively. The chirp distribution can be obtained from the retrieved temporal amplitude and phase. Similarly, the first-order phase term in Fig. 6(g) differs from that in Fig. 6(h), corresponding to the shifts in the peak intensities in time and wavelength between the two points. Therefore, the STS-CAI method can reconstruct the spatiotemporal information of a chirped pulse in a single shot. Reconstructing the spatiotemporal information at different spatial positions is crucial for improving the beam quality of high-energy ultrashort pulses.
4 Discussion and Conclusion
In this study, what we believe is a novel single-shot ultrafast STS measurement method, called STS-CAI, was proposed to simultaneously measure high-dimensional STS information of an ultrafast laser pulse. STS-CAI utilizes a multiplexed encoding CDI algorithm to simultaneously reconstruct the amplitude and phase information across various wavelengths and time instants from a single-shot broadband diffraction pattern. In addition, the streak camera deflects the ultrafast information, whereas the encoding plate exhibits distinct modulation and diffraction characteristics for light fields of different wavelengths; therefore, STS-CAI can obtain complex amplitudes at different wavelengths and time instants from a single-shot diffraction pattern via dual-mode multiplexing of wavelength and time. A time-domain GS iterative transform was further combined with this reconstruction to retrieve the STS phase of the ultrashort pulses. More importantly, STS-CAI enables lensless imaging, does not require a complex optical-path synchronization system, and minimizes the dispersion errors caused by system components. In the experiment, the STS information of picosecond laser pulses was precisely measured by the STS-CAI system with a temporal resolution of 4 ps, a spectral resolution of 0.8 nm, and a high sequence depth of 50 in both intensity and phase. The properties at the focus were also retrieved through complex-amplitude light-field propagation. STS-CAI has significant application potential in the ultrafast STS measurement of singular pulses, high-energy pulses, ultraviolet-band pulses, and broadband ultrashort pulses.
For high-energy ultrashort pulses in large facilities, the increase in peak power requires extremely high spatiotemporal quality to avoid damaging optical components; therefore, measuring the spatiotemporal performance of the chirped pulse is extremely important. In addition, STS-CAI is a lensless imaging system that can be used for spatiotemporal measurements in the extreme ultraviolet wavelength band. It is anticipated that high-spatial-resolution ultrashort-pulse STS measurements in the ultraviolet region will be achieved in the future, presenting the potential for ultrafast imaging with high spatial resolution and contributing significantly to the advancement of ultrafast super-resolution techniques. Finally, the wide-spectrum measurement capability of this method can be used for ultrafast measurement of attosecond harmonics to break the femtosecond temporal-resolution limit of the streak camera. Consequently, STS-CAI will be crucial for the measurement of ultrashort pulses and ultrafast phenomena with high spatial and temporal resolutions. The improvement of STS-CAI can focus on the following three aspects. First, time deflection is implemented with a streak camera, whose current fastest temporal resolution is 108 fs;44 therefore, the pulse duration that can be measured is limited. Second, the diameter of the measured pulse is limited by the slit width of the streak camera, which is generally 5 mm. During the imaging process, within the limited range of the slit, if the beam diameter increases, the diffraction distance needs to be reduced to allow the collection of high-frequency information. Laser-beam demagnification is required for larger-diameter laser-field measurements; therefore, the spatial resolution needs to be improved to achieve the ability to measure large-diameter ultrashort pulses. In addition, the Coulomb repulsion among electrons within the streak camera diminishes the coherence of the diffraction pattern, thereby constraining enhancements in spatial resolution. This constraint requires that the illumination intensity remain within a permissible range for the streak camera to mitigate the increase in diffraction-pattern noise attributable to Coulomb repulsion. Finally, the streak camera used in this technique degrades the interference contrast of the diffraction pattern owing to electron-beam diffusion,45 which affects the spatial resolution and phase-measurement accuracy. Therefore, in future research, it is extremely important to use optimization algorithms to improve the coherence of the diffraction pattern and thus the spatial resolution and phase-measurement accuracy of STS-CAI. In addition, high-spatial-resolution STS measurements could be achieved using the ultrafast deflection characteristics of electro-optic crystals.46 Based on these prospects, it is anticipated that STS-CAI, as a single-shot imaging technique, will be highly applicable in various fields of ultrafast optical imaging, particularly broadband ultrashort-pulse phase imaging.
Yingming Xu is a postdoc at Research Center for Novel Computational Sensing and Intelligent Processing, Zhejiang Lab. He received his PhD in optical engineering from Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences (CAS), in 2023. His current research interest focuses on ultrafast optical phase imaging.
Chengzhi Jin is an assistant professor at College of Electronics and Information Engineering, South-Central Minzu University. He received his PhD in optics from East China Normal University in 2024. His current research interest focuses on computational optical imaging.
Liangze Pan is an assistant professor at College of Optical and Electronic Technology, China Jiliang University. He received his PhD in optical engineering from Shanghai Institute of Optics and Fine Mechanics, CAS, in 2021. His current research interest focuses on phase retrieval and ultrafast event diagnostics.
Yu He is a PhD student at State Key Laboratory of Precision Spectroscopy, East China Normal University under the supervision of Prof. Shian Zhang. His research focuses on high-speed super-resolution microscopy.
Yunhua Yao is an associate professor at State Key Laboratory of Precision Spectroscopy, East China Normal University (ECNU). He received his PhD in optics from ECNU in 2018. His current research interest focuses on high-speed super-resolution microscopy and ultrafast optical imaging.
Dalong Qi is a young professor at State Key Laboratory of Precision Spectroscopy, East China Normal University (ECNU). He received his PhD in optics from ECNU in 2017. His current research interest focuses on ultrafast optical and electronic imaging techniques and their applications.
Cheng Liu is a professor at National Laboratory on High Power Laser and Physics, Shanghai Institute of Optics and Fine Mechanics (SIOM), CAS. He received his PhD in optical engineering from SIOM in 2003. His current research interest focuses on optical measurement and optical imaging.
Junhui Shi is a professor at Computational Sensing Research Center, Zhejiang Lab. He received his PhD in Chemistry from Princeton University in 2013. His current research interest focuses on biomedical ultrasound and photoacoustic imaging, high-performance sensors, and artificial intelligence in imaging science.
Zhenrong Sun is a professor at State Key Laboratory of Precision Spectroscopy, East China Normal University (ECNU). He received his PhD in physics from ECNU in 2007. His current research interest focuses on ultrafast dynamics of clusters and ultrafast optical imaging.
Shian Zhang is a professor and the deputy director of State Key Laboratory of Precision Spectroscopy, East China Normal University (ECNU). He received his PhD in optics from ECNU in 2006. His current research interest focuses on ultrafast optical imaging, high-speed super-resolution microscopy, and light field manipulation.
Jianqiang Zhu is a professor at National Laboratory on High Power Laser and Physics, Shanghai Institute of Optics and Fine Mechanics (SIOM), CAS. He received his PhD in optical engineering from SIOM in 1993. His current research interest focuses on overall optical design, structural design, measurement and control technology of laser drivers.
[13] J. Liang, C. Webb, J. Jones, L. V. Wang. Ultrafast optical imaging. Handbook of Laser Technology and Applications, 315-328 (2021).
[27] Y. C. Eldar, G. Kutyniok. Compressed Sensing: Theory and Applications (2012).
Yingming Xu, Chengzhi Jin, Liangze Pan, Yu He, Yunhua Yao, Dalong Qi, Cheng Liu, Junhui Shi, Zhenrong Sun, Shian Zhang, Jianqiang Zhu, "Single-shot spatial-temporal-spectral complex amplitude imaging via wavelength-time multiplexing," Adv. Photon. 7, 026004 (2025)
Category: Research Articles
Received: Nov. 19, 2024
Accepted: Feb. 13, 2025
Posted: Feb. 13, 2025
Published Online: Mar. 10, 2025
The Author Email: Qi Dalong (dlqi@lps.ecnu.edu.cn), Liu Cheng (chengliu@siom.ac.cn), Shi Junhui (junhuishi@outlook.com)