1. INTRODUCTION
Optical imaging of ultra-fast phenomena [1,2] provides critical information for understanding fundamental aspects of the world we live in [3–5]. The recording of light-in-flight, which enables the visualization and characterization of the propagation of light, is one such example. The capturing of light-in-flight, sometimes referred to as transient imaging, was first performed with intensity gating [6], and then holographic gating [7], which selected photons at a specific time by a shutter or an interference scheme. Outstanding works with various applications have been developed based on the time gating principle [8–11]. More recently, advances in optical devices have enabled the capture of light-in-flight by recording the arrival of photons in a continuous manner with their corresponding arrival time; such devices include streak cameras [12,13], photonic mixer devices [14,15], and single-photon avalanche diode (SPAD) arrays [16,17].
Compared to everyday photography, imaging light-in-flight is particularly interesting because light is both the medium carrying information to the camera in the form of scattered photons and the object to be imaged itself. In such a scenario, the speed of light cannot be treated as an infinite number, as is otherwise true for everyday photography. The recording of such events will be significantly observer-dependent and will exhibit spatiotemporal distortions [18–21]. In the context of this work, “relativistic effects” refer to such spatiotemporal distortions.
To explain this point further, consider the two examples in Fig. 1. As shown in Fig. 1(a), two consecutive frames are taken by a camera, recording a car moving from point A at time 0 to point B within a time interval $\Delta t$. It is worth noting that the actual moments when the camera records these two frames are $t_A$ and $\Delta t + t_B$, so the time interval between these two frames would be $\Delta t' = \Delta t + (t_B - t_A)$ rather than $\Delta t$. Nevertheless, because the speed of light can be treated as infinite compared to the speed of the car, the camera-measured $\Delta t'$ can be treated as $\Delta t$, which leads to observer-independent results. On the contrary, when a light pulse is travelling from A to B, as shown in Fig. 1(b), the camera-measured time interval $\Delta t'$ can no longer be treated as $\Delta t$, because $t_B - t_A$ is of the same scale as $\Delta t$. The recorded information of such events is significantly observer-dependent and contains spatiotemporal distortions.
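Written out with the symbols of Fig. 1, and with illustrative numbers of our own choosing, the distinction reads:

```latex
% Camera-measured interval versus true interval (symbols as in Fig. 1):
\Delta t' \;=\; (\Delta t + t_B) - t_A \;=\; \Delta t + (t_B - t_A).
% By the triangle inequality, |t_B - t_A| \le |AB|/c. For a car crossing
% |AB| = 10\,\mathrm{m} in \Delta t = 1\,\mathrm{s}, the correction is at
% most ~33 ns, a relative error of ~3\times 10^{-8}. For a light pulse,
% \Delta t = |AB|/c is itself ~33 ns, so the correction is of the same
% order as the event being imaged.
```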

Figure 1. Schematics of the difference between imaging (a) a moving car and (b) a flying light pulse. $\Delta t$ stands for the time during which the object moves from position A to position B, and $t_A$ and $t_B$ denote the times of flight for the scattered photons to propagate to the camera from positions A and B, respectively.
In order to retrieve observer-independent information of light-in-flight, this relativistic effect (the finite speed of light $c$) needs to be compensated for to determine the accurate time when the event actually happens rather than the arrival time at which it is detected by the camera. In holographic light-in-flight imaging, this compensation can be performed using a graphical method based on the ellipsoids of the holodiagram [22]. A straightforward approach is to simply remove the time of flight, i.e., $t_A$ and $t_B$ in Fig. 1, from each measurement according to its spatial location, which in turn is measured point by point in three dimensions by a distance meter before the actual imaging of light-in-flight is performed [23].
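For comparison with the approach proposed here, the direct subtraction strategy of Ref. [23] can be sketched in a few lines, assuming the per-pixel 3D scene positions have already been measured by a distance meter; the function and variable names below are illustrative, not from the original work:

```python
import numpy as np

C = 299_792_458.0  # speed of light in vacuum (m/s); use c/n inside a medium

def remove_time_of_flight(arrival_time, scene_xyz, camera_xyz):
    """Convert camera arrival times to event times by subtracting the
    per-pixel photon time of flight (the strategy of Ref. [23]).

    arrival_time : (H, W) array, seconds; when photons reach the camera
    scene_xyz    : (H, W, 3) array, metres; pre-measured 3D position
                   imaged by each pixel (from an auxiliary distance meter)
    camera_xyz   : (3,) array, metres; camera position
    """
    tof = np.linalg.norm(scene_xyz - camera_xyz, axis=-1) / C
    return arrival_time - tof
```

The method proposed below removes the need for the auxiliary `scene_xyz` measurement by inferring the geometry from the arrival times themselves.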
Interestingly, we note that the observer-dependent data of light-in-flight contains more information than the aforementioned works have exploited. Here, we demonstrate that the relativistic effects can be compensated during the imaging of light-in-flight by further exploiting the $(x, y, t)$ data recorded by a SPAD camera via a rigorously constructed optical model and a computational layer to obtain the non-distorted time of a flying light pulse, without any additional measurements or auxiliary ranging equipment. Simultaneously, the information of an extra dimension, i.e., the $z$ dimension, can be retrieved, leading to the observer-independent space-time $(x, y, z, t)$ four-dimensional (4D) reconstruction of light-in-flight. The proposed scheme enables the accurate visualization of transient optical phenomena such as light scattering or interaction with materials.
2. EXPERIMENTAL SETUP
Our experimental system is illustrated in Fig. 2. A 637 nm pulsed laser (PicoQuant LDH-P-635, wavelength 636–638 nm, 1.2 mW) emits pulses of 68 ps duration at a 20 MHz repetition rate. The pulses propagate across the field of view (FOV) of a SPAD camera, which consists of a SPAD array (Photon Force PF32, time resolution 55 ps, pixel resolution $32 \times 32$, pixel pitch 50 μm, fill factor 1.5%, operating at 5000 frames per second) and a camera lens (Thorlabs MVL4WA, effective focal length 3.5 mm, F/1.4, CS mount). The camera is synchronized to the pulsed laser; each of its $32 \times 32$ SPAD detectors operates individually in time-correlated single-photon counting (TCSPC) mode and records the temporal information of a laser pulse by sensing one of its scattered photons. By accumulating data over multiple detection frames, each pixel obtains a temporal histogram of scattered photons: the total count represents the scattering intensity of the laser pulse at the corresponding spatial location, and the histogram shape indicates the arrival time of the scattered photons. Combining the histogram data recorded by the $32 \times 32$ pixels of the SPAD camera, the projection of the light-in-flight onto the $x$–$y$ plane inside the FOV of the camera can be reconstructed [24].
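For concreteness, a minimal sketch of the per-pixel TCSPC accumulation just described; the frame encoding (one time-bin index per pixel per frame, $-1$ for no detection) and the histogram depth are illustrative assumptions, not the PF32's actual data format:

```python
import numpy as np

N_BINS = 256  # assumed histogram depth (device-dependent); bin width 55 ps

def accumulate_histograms(frames):
    """Accumulate per-pixel TCSPC histograms over many detection frames.

    frames : (n_frames, 32, 32) integer array; each entry is the time-bin
             index of the single photon detected by that pixel in that
             frame, or -1 when no photon was detected (assumed encoding).
    Returns a (32, 32, N_BINS) array of photon-counting histograms.
    """
    hist = np.zeros((32, 32, N_BINS), dtype=np.int64)
    for y in range(32):
        for x in range(32):
            t = frames[:, y, x]
            t = t[(t >= 0) & (t < N_BINS)]  # keep only frames with a photon
            hist[y, x] += np.bincount(t, minlength=N_BINS)
    return hist
```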

Figure 2. Experimental system for light-in-flight measurement and data processing. (a) In the experiment, the pulsed laser and the SPAD camera are synchronized via a trigger generator. Placed at the origin $(0, 0, 0)$, the 637 nm pulsed laser emits pulses across the field of view of the SPAD camera, which is equipped with a lens of 3.5 mm focal length. The object focal plane of the camera defines the field of view within which the light paths are recorded. The SPAD camera collects the scattered photons from the propagating laser pulses and records a histogram at each pixel in TCSPC mode. (b) The raw histogram data are fitted with a Gaussian distribution. Histograms with widths too large or too small are discarded (pixels 1 and 2). Malfunctioning pixels with abnormally large counts are also discarded (pixel 4), leaving only effective pixels (pixel 3). (c) The arrival time of the scattered photons in each effective pixel is determined as the peak position of the fitted Gaussian distribution, yielding a map of pixel versus arrival time. Consequently, the projection of the light path on the $x$–$y$ plane, as well as the arrival times along the path, is obtained, forming the $(x, y, t)$ three-dimensional data of light-in-flight.
An accurate estimate of the arrival time of the scattered photons at each pixel of the SPAD camera is important for the light-in-flight reconstruction. Gaussian fitting is performed on the raw histogram data at each pixel to suppress the statistical noise of photon counting. Under the assumption that the background light and the dark counts of the SPAD camera, whose temporal distribution is quasi-flat, add a bias to the histogram, a constant term is added to the Gaussian function during the fitting in order to improve the estimation accuracy of the arrival time. During the data processing, if the width of a fitted Gaussian curve is much larger or smaller than the expected width, the corresponding pixel is assumed to be malfunctioning or extremely noisy and is therefore discarded. Furthermore, the systematic overall delay, which is mainly caused by the electronic jitter of the related devices and differs from pixel to pixel, is compensated by a temporal offset for each effective Gaussian curve. The offset for each pixel is determined as the temporal difference between the measured peak position of that pixel and its theoretical value, measured while the camera is uniformly illuminated with a collimated and expanded pulsed laser.
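A minimal sketch of this fitting and quality-control step, using SciPy's general-purpose `curve_fit`; the width threshold and the initial guesses are illustrative choices, not the values used in the experiment:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss_bias(t, a, mu, sigma, b):
    """Gaussian peak on a constant background (ambient light + dark counts)."""
    return a * np.exp(-0.5 * ((t - mu) / sigma) ** 2) + b

def fit_arrival_time(bins_ps, counts, expected_sigma_ps=30.0, tol=3.0):
    """Fit one pixel's histogram and return the peak position (arrival
    time, ps), or None if the pixel looks malfunctioning or too noisy."""
    p0 = [counts.max() - counts.min(),   # amplitude guess
          bins_ps[np.argmax(counts)],    # peak-position guess
          expected_sigma_ps,             # width guess
          np.median(counts)]             # constant-background guess
    try:
        (a, mu, sigma, b), _ = curve_fit(gauss_bias, bins_ps, counts, p0=p0)
    except RuntimeError:
        return None                      # fit did not converge: discard pixel
    sigma = abs(sigma)
    if not (expected_sigma_ps / tol < sigma < expected_sigma_ps * tol):
        return None                      # width far from expectation: discard
    return mu                            # arrival time, before offset correction
```

The per-pixel calibration offset measured under uniform illumination is then subtracted from the returned peak position.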
Once the raw histogram data are Gaussian fitted and the temporal delay is compensated at each pixel, the peak position of the histogram is determined, which represents the arrival time of the light recorded at the corresponding pixel. As shown in Fig. 2, the path of a laser pulse propagating through the FOV of the camera is reconstructed as its projection on the $x$–$y$ plane, and the arrival time along that path is estimated, forming the $(x, y, t)$ three-dimensional (3D) data of light-in-flight.
3. OPTICAL MODEL AND COMPUTATIONAL LAYER
In order to transfer the arrival time $t_a$, distorted by the relativistic effects, to the accurate time at which the light pulse actually is at a given position, an optical model is built as shown in Fig. 3. For simplicity, the computation is based on the assumption that light propagates in air, though it works for any uniform and homogeneous medium in which light propagates in a straight line at a fixed velocity. The moment at which a light pulse enters the FOV of the camera is defined as $t = 0$, which makes $t_p$ the time elapsed from the entering position to where the pulse is now; $t_p$ will be referred to as the propagation time hereafter. The plane containing the entering point is defined as the reference plane. If a light pulse propagates from B towards C, and its arrival time $t_a$ at an arbitrary point D is recorded, this arrival time corresponds to the timespan of light traveling from B to D and then scattering to A (while the propagation time $t_p$ of the light pulse corresponds to the time during which light travels from B to D). These times, $t_a$ and $t_p$, satisfy the equation
$$t_a = t_p + \frac{\sqrt{l_{AB}^{2} + (c\,t_p)^{2} - 2\,l_{AB}\,c\,t_p \sin(\beta + \theta)}}{c}, \quad (1)$$
where $l_{AB}$ is the distance from B to the camera A and can be calculated from the recorded arrival time of the pixel corresponding to B (at B, $t_p = 0$, so $l_{AB} = c\,t_a$ there), and $\beta$ is the angle between AB and AF, F being the foot of the perpendicular from A onto the reference plane; $\beta$ can be calculated using the value of $l_{AB}$ and the known camera FOV. The second term of Eq. (1) represents the time interval during which light propagates from D to A, in which the propagation angle $\theta$ is defined as the angle between the light path BD and its projection BE on the reference plane. $\theta$ and $t_p$ are related via the following equation:
$$t_p = \frac{l_{BE}}{c \cos\theta}, \quad (2)$$
where $l_{BE}$ is the length of BE and can be calculated using the value of $l_{AB}$ and the known camera FOV. By substituting Eq. (2) into Eq. (1), the arrival time $t_a$ and the propagation angle $\theta$ form a one-to-one relationship. BG is the projection of BC on RP and is recorded by the SPAD camera, forming 32 photon-counting histograms (one per pixel column along the projection). The arrival time $t_a^{(i)}$ of the $i$th histogram can be used to yield a propagation angle $\theta_i$. However, due to the noise contained in the recorded data, the 32 $t_a^{(i)}$ yield 32 different $\theta_i$, which should theoretically be identical. The optimal estimate $\theta_o$ of the propagation angle is then calculated as the value having the minimum root-mean-square error (RMSE) with respect to the 32 resulting $\theta_i$.

Figure 3. Optical model for the computation of the propagation time $t_p$. $\beta$ and $\theta$ are the angles $\angle \mathrm{BAF}$ and $\angle \mathrm{DBE}$, respectively. $l_{AB}$ and $l_{BE}$ are the lengths of BA and BE, respectively. BE is the projection of BD on the reference plane (RP).
Using the calculated propagation angle $\theta_o$, the propagation time $t_p$ can be determined for each recorded $(x, y, t_a)$ sample. Furthermore, the corresponding $z$ coordinate can also be retrieved via the knowledge of $\theta_o$. Therefore, the observer-independent 4D $(x, y, z, t)$ information of light-in-flight is reconstructed.
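For illustration, a minimal numerical sketch of this estimation is given below. It implements the same geometry as Eqs. (1) and (2) with explicit 3D vectors rather than the closed form, assumes the arrival times are referenced so that the pulse is at the entry point B at $t = 0$, and fits a single $\theta$ by brute-force minimization of the arrival-time RMSE (the paper instead inverts one $\theta_i$ per histogram and keeps the minimum-RMSE value); all names, units, and the search grid are our own choices:

```python
import numpy as np

C_MM_PS = 0.299792458  # speed of light, mm per ps

def modelled_arrival_times(theta, B, u, n, l_BE, A):
    """Arrival times t_a predicted by the optical model of Fig. 3.

    theta : propagation angle (rad) between BD and its projection BE,
            positive towards the camera
    B     : (3,) entry point of the pulse on the reference plane (mm)
    u     : (3,) unit vector along the recorded projection in the plane
    n     : (3,) unit normal of the reference plane, pointing at the camera
    l_BE  : (k,) projected distances travelled, one per histogram (mm)
    A     : (3,) camera position (mm)
    """
    d = np.cos(theta) * u + np.sin(theta) * n         # true 3D direction
    l_BD = l_BE / np.cos(theta)                       # Eq. (2): |BD| = |BE| / cos(theta)
    D = B + l_BD[:, None] * d                         # pulse positions in 3D
    t_p = l_BD / C_MM_PS                              # propagation time B -> D
    t_scat = np.linalg.norm(A - D, axis=1) / C_MM_PS  # scattering time D -> A, cf. Eq. (1)
    return t_p + t_scat

def estimate_theta(t_a_measured, B, u, n, l_BE, A):
    """Brute-force the single angle that best explains all arrival times."""
    thetas = np.radians(np.linspace(-80.0, 80.0, 3201))
    rmse = [np.sqrt(np.mean((modelled_arrival_times(th, B, u, n, l_BE, A)
                             - t_a_measured) ** 2)) for th in thetas]
    return thetas[int(np.argmin(rmse))]
```

Once $\theta_o$ is fixed, the same geometry (`D` and `t_p` above) directly yields the $(x, y, z, t_p)$ samples, which is how the extra $z$ dimension is recovered.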
The procedure to reconstruct multiple light paths is illustrated in Fig. 4. The light path from the laser emitting point to Mirror 1 is denoted as LP1, and the consecutive paths from Mirror 1 to Mirror 2 and from Mirror 2 to the exit are denoted as LP2 and LP3, respectively. The light paths are reconstructed sequentially, each with its corresponding propagation angle and reference plane. As shown in Fig. 4(a), the plane at $z = 0$, containing the laser emitting point, is defined as reference plane 1 (RP1). The projection of LP1 on RP1, denoted as PP1, together with its spatiotemporal information, is recorded by the SPAD camera. Using the propagation angle estimation procedure described above, $\theta_{o1}$ can be estimated, and the observer-independent 4D $(x, y, z, t)$ information of LP1 is reconstructed, with the starting point (S1) and ending point (E1) of LP1 determined. As shown in Fig. 4(b), the starting point S2 of LP2 is set to E1, and reference plane 2 (RP2) is defined as the plane containing E1. In the same manner, $\theta_{o2}$ and the observer-independent 4D information of LP2 are calculated. Using the ending point E2 of LP2 as the starting point of LP3 (S3), reference plane 3 (RP3) is defined as shown in Fig. 4(c). Similarly, $\theta_{o3}$ and the 4D information of LP3 are retrieved. The full evolution of light-in-flight in the FOV of the camera is then reconstructed in $(x, y, z, t)$, as sketched below.
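The chaining itself is then a short loop. The sketch below reuses `C_MM_PS` and `estimate_theta` from the previous sketch; the per-segment data layout (a `dict` with keys `"t_a"`, `"u"`, `"n"`, `"l_BE"`) is a hypothetical convenience, not the paper's data format:

```python
import numpy as np

def reconstruct_segment_4d(start, theta, u, n, l_BE):
    """Sample the observer-independent (x, y, z) points and propagation
    times t_p along one straight path segment."""
    d = np.cos(theta) * u + np.sin(theta) * n  # true 3D propagation direction
    l_BD = l_BE / np.cos(theta)                # Eq. (2): path length from projection
    pts = start + l_BD[:, None] * d            # (x, y, z) samples along the path
    return pts, l_BD / C_MM_PS                 # t_p, measured from the segment start

def reconstruct_consecutive_paths(segments, S1, A):
    """Chain the procedure of Fig. 4: the ending point E_k of one path is
    the starting point S_{k+1} of the next, and the clock carries over."""
    start, t0, paths = np.asarray(S1, dtype=float), 0.0, []
    for seg in segments:
        theta = estimate_theta(seg["t_a"], start, seg["u"], seg["n"],
                               seg["l_BE"], A)
        pts, t_p = reconstruct_segment_4d(start, theta, seg["u"], seg["n"],
                                          seg["l_BE"])
        paths.append((pts, t0 + t_p))          # 4D (x, y, z, t) samples
        start, t0 = pts[-1], t0 + t_p[-1]      # E_k -> S_{k+1}
    return paths
```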

Figure 4. Reconstruction procedure for consecutive light paths. (a) For light path 1 (LP1), the reference plane (RP1) is the plane containing the starting point S1. The spatial location of the projection (PP1), the propagation angle, and the ending point (E1) of LP1 are determined using the proposed geometric model. (b) E1 is used as S2 for the reconstruction of LP2, and RP2 is the plane containing S2. The equation of LP2 and the position of E2 can be obtained. (c) In the same manner, LP3 and E3 are determined with RP3.
4. RESULTS
A. Propagation Angle Estimation
An experiment is performed using the setup in Fig. 2 to evaluate the estimation accuracy of $\theta_o$ before performing the 4D light-in-flight reconstruction. In this experiment, laser pulses propagate horizontally, i.e., parallel to the $x$ direction, through the center of the camera FOV, with the propagation angle gradually adjusted from $-10°$ to $10°$ in steps of $0.5°$ (positive angles are towards the camera and negative angles away from it). For each angle, measurements are acquired for 200 s with an exposure time of 200 μs for each detection frame. The average photon count of the SPAD camera is well below one photon per pulse per pixel, which satisfies the photon-starved condition required for TCSPC mode. A total of 20 measurements are performed, and the resulting $\theta_o$ is the average of these 20 measurements.
During the experiment, the plane containing the laser emitting point is selected as the reference plane. The histogram of any malfunctioning pixel is discarded, and its arrival time is replaced by the value linearly interpolated from the arrival times of the neighboring pixels. It is worth mentioning that, theoretically, the propagation angle can be estimated using one or several arrival times [25,26]. However, due to the discrete nature of the temporal measurement of the SPAD camera and the noise recorded in a practical experiment, the greater the number of arrival times $t_a$ involved in the calculation, the more accurate the estimated $\theta_o$ will be. Figure 5(a) shows the angular error of the estimated propagation angle with respect to the ground truth when different numbers of $t_a$ are used in the estimation. As one would expect, using 2 $t_a$ yields the largest mean error, 3.03°, and using all 32 $t_a$ gives the smallest, 0.15°.

Figure 5. Experimental results of the propagation angle estimation. (a) Angular error resulting from using different numbers of $t_a$ for the estimation of $\theta_o$. (b) Calculated propagation time $t_p$ with respect to the arrival time $t_a$ at different propagation angles. (c) Variation of the measured full width at half maximum of a laser pulse with respect to its propagation angle $\theta$, caused by the relativistic effects.
Figure 5(b) shows the relationship between the measured arrival time $t_a$ and the actual propagation time $t_p$ for different propagation angles. Figure 5(c) demonstrates how the relativistic effects distort the measured pulse width of a laser pulse. The variation of the pulse width is indistinguishable when the propagation angle lies between $-6°$ and $6°$ due to the temporal discretization of the SPAD camera, whose time bin is 55 ps. Nevertheless, the experimental results are in good agreement with the theoretical curve. Furthermore, the results yield a measured full width at half maximum (FWHM) pulse width of 65 ps after deconvolving the systematic impulse response function from the Gaussian-fitted data, which is close to the 68 ps pulse width given by the laser manual.
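Since both the fitted histograms and the systematic impulse response are treated as Gaussian, this deconvolution reduces to subtracting widths in quadrature (the impulse response width itself is not quoted here):

```latex
% Gaussian-in-Gaussian deconvolution of the pulse width:
\mathrm{FWHM}_{\mathrm{pulse}}
  \;=\; \sqrt{\mathrm{FWHM}_{\mathrm{measured}}^{2}
            - \mathrm{FWHM}_{\mathrm{IRF}}^{2}}
```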
B. Light-in-Flight Reconstruction
A second experiment is then performed to reconstruct light-in-flight in 3D space, where the pulses are emitted from a laser and reflected by two mirrors to generate three consecutive light paths across the FOV of the SPAD camera. In the experiment, a distance of 40 mm is kept between the FOV and any optical element (e.g., the laser source and the mirrors) in order to avoid spurious scattering of light into the measurement. The emitting point of the pulsed laser is selected as the origin $(0, 0, 0)$ of the coordinate system for the calculation, and the object focal plane of the SPAD camera is set to the plane in which the light paths propagate. Using the same configuration as before, the SPAD camera records the observer-dependent $(x, y, t)$ information of the three light paths inside the FOV. The reconstruction of light-in-flight is performed by sequentially determining the light paths from the laser emitting point to Mirror 1, to Mirror 2, and then to the exit point, following the procedure given in Section 3.
Figure 6(a) shows the reconstructed propagation of the laser pulse in 3D space, overlaid onto a photograph of the experimental setup. The instantaneous positions $(x, y, z)$ of the laser pulse along its path are reconstructed with an accuracy of 1.75 mm RMSE with respect to the ground truth. The propagation times of the light pulse are estimated with an accuracy of 3.84 ps, determined as the RMSE between the ground truth and the estimated propagation times $t_p$ calculated using Eq. (2). This 3.84 ps accuracy is dramatically better than the 55 ps time resolution of the SPAD camera. The reason for this improvement lies in the fact that the inaccuracy caused by the discrete temporal measurements and the experimental noise is suppressed during the estimation of each propagation angle $\theta_o$, which involves 32 measured arrival times rather than one. The full evolution of the laser pulse propagation can be found in Visualization 1. The FWHM of the propagating laser pulse, obtained by deconvolving the systematic impulse response function from the Gaussian-fitted data, is approximately 70 ps, consistent with the specification of the laser.

Figure 6. Experimental 4D reconstruction of light-in-flight. (a) Reconstruction of a laser pulse reflected by two mirrors. The RMSEs of the reconstruction (red line) with respect to the ground truth (dashed line) in position and time are 1.75 mm and 3.84 ps, respectively. (b) The difference between the calculated propagation time $t_p$ (red line) and the measured arrival time $t_a$ (blue line) at each recorded frame. The propagation time is in good agreement with the ground truth (dashed line), demonstrating a feasible compensation of the relativistic effects via the proposed scheme.
Figure 6(b) shows the difference between the calculated propagation time $t_p$ (red line) and the measured arrival time $t_a$ (blue line) at each recorded frame (55 ps time interval) of the SPAD camera, where the arrival time has been offset so that it starts at 0 ps in the first frame. The measured arrival time has been successfully compensated into the observer-independent propagation time $t_p$, which is in good agreement with the ground truth (dashed line). The temporal RMSE with respect to the ground truth is significantly improved from 174.80 ps to 3.84 ps.
5. DISCUSSION AND CONCLUSION
The estimation of the propagation angle is crucial to the light-in-flight reconstruction in this work, and we have demonstrated accurate estimations of the propagation angle from $-10°$ to $10°$. Theoretically, the proposed approach can be used to estimate any angle between $-90°$ and $90°$, not including the endpoints. Practically, there are two major aspects, noise and diffusion, to consider when measuring a steep angle. Regarding noise, the reconstruction becomes more accurate at steeper angles because the relativistic effect is more pronounced and the difference between the measured data of two adjacent pixels is more easily recorded above the noise; an accurate reconstruction at a shallower angle is more difficult because the milder distortion is easily drowned in the systematic noise. Regarding diffusion, a steep forward angle (towards the camera) increases the signal-to-noise ratio of the measured data, while a steep backward angle decreases it.
The proposed method assumes that the light-in-flight to be reconstructed happens in a uniform and homogeneous medium, where light propagates in a straight line at a fixed velocity. However, it is also possible to reconstruct self-bending light beams, such as Airy beams [27], in a differential manner: the curved light path of the Airy beam propagation can be viewed as a combination of many short straight segments, each of which can then be reconstructed individually by the proposed method.
The position error of a reconstructed light path is mainly determined by the estimation accuracy of the propagation angle and by the recorded projection of the light path on the SPAD camera. The estimation accuracy of the propagation angle can be further improved by taking more measurements or by using a camera with lower noise. The accuracy of the recorded light path projection is limited by the pixel resolution ($32 \times 32$) and fill factor (1.5%) of the SPAD camera. In particular, when a light pulse propagates in a quasi-horizontal direction, the small variation in the $y$ direction cannot be spatially resolved by the SPAD camera; this error accumulates as the light pulse propagates and degrades the accuracy of the reconstruction. A newly developed SPAD camera with a higher pixel resolution and a 61% fill factor [28] would improve the reconstruction accuracy of the proposed light-in-flight imaging system. A backside-illuminated multi-collection-gate silicon sensor can also be used for light-in-flight imaging [29], providing a higher fill factor, a larger photoreceptive area, and a higher spatial resolution, although its temporal resolution is currently 10 ns and its sensitivity is not as good as that of a SPAD camera. However, the ultimate temporal-resolution limits of these sensors imply that sub-ns temporal resolution could be achievable in the future, allowing precise light-in-flight measurements with a single laser shot, as shown in the proof-of-concept work by Etoh et al. [29].
In summary, we have proposed a computational imaging scheme that reconstructs light-in-flight in observer-independent 4D $(x, y, z, t)$ by recording the scattered photons of the light propagation with a SPAD camera and compensating the relativistic effects via an optical-model-based computational layer. The relativistic effects in this context refer to the spatiotemporal distortion caused by the fact that the speed of light must be treated as a finite number in certain scenarios such as transient imaging. The estimation of the light propagation angle $\theta$, which is crucial to the 4D light-in-flight reconstruction, has a mean error of 0.15° over the range from $-10°$ to $10°$. In the 3D light-in-flight reconstruction, the temporal accuracy is improved from the 174.80 ps of the distorted arrival time to the 3.84 ps of the compensated propagation time. The spatial accuracy of the reconstruction is 1.75 mm, better than both the 8 mm transverse spatial resolution determined by the optical setup of the system and the 16.5 mm longitudinal spatial resolution determined by the 55 ps time resolution of the SPAD camera. The improvement is mainly achieved by the accurate estimation of the propagation angle $\theta$, in which the random noise and the inaccuracy of the discrete measurements are suppressed by an estimation involving multiple measurements. The accurately estimated propagation angle can be further exploited to correct other distorted measurements.
The proposed 4D imaging scheme is applicable to the reconstruction of light-in-flight in other circumstances, such as light traveling inside a cavity or interacting with other materials. This work expands the recording and measurement of repeatable ultra-fast events with extremely low scattering from 3D to 4D. It can also be applied to observe optical phenomena that pose difficulties for other imaging schemes, e.g., the behavior of light in micro- and nanostructures and the interaction between light and matter.