Overcoming the diffraction barrier in long-range optical imaging is recognized as a critical challenge for space situational awareness and terrestrial remote sensing. This study presents a super-resolution imaging method based on reflective tomography LiDAR (RTL), breaking through the traditional optical diffraction limit to achieve 2 cm resolution imaging at a distance of 10.38 km. To address the challenges of atmospheric turbulence, the diffraction limit, and data sparsity in long-range imaging of complex targets, the study applies the nonlocal means (NLM) algorithm, combined with a self-developed RTL system, to achieve high-precision reconstruction from multi-angle projection data. Experimental results show that the system achieves a reconstruction resolution for a complex target (NUDT WordArt model) that is better than 2 cm, 2.5 times better than the 5 cm diffraction limit of the traditional 1064 nm laser optical system. In sparse data scenarios, the NLM algorithm outperforms traditional algorithms in metrics such as information entropy (IE) and structural similarity (SSIM) by suppressing artifacts and maintaining structural integrity. This study presents the first demonstration of centimeter-level tomographic imaging of complex targets at near-ground distances exceeding 10 km, providing a new paradigm for fields such as space debris monitoring and remote target recognition.
With the development of space technology, high-resolution (HR) images are in demand in many key areas, such as astronomical observation and space debris monitoring[1–3]. The optical telescope has always been the most commonly used tool to acquire HR images. Owing to the diffraction of light, the resolution of a telescope is theoretically limited by the diameter $D$ of the mirror aperture and the light wavelength $\lambda$, as expressed by the Rayleigh criterion $\theta_R = 1.22\lambda/D$[4,5]. Consequently, the main route to improving image resolution has been to increase the aperture of the telescope. However, the difficulty and cost of manufacturing large-diameter telescopes increase exponentially, and for ground-based telescopes the image resolution is further degraded by atmospheric turbulence[6–8]. While adaptive optics (AO) systems can partially mitigate atmospheric turbulence effects in ground-based instruments, the prohibitive cost of critical components such as deformable mirrors in conventional AO architectures has largely confined their application to large telescope installations; such systems also lack portability and fail to address the fundamental resolution ceiling imposed by the diffraction limit. This impasse has spurred the development of synthetic aperture LiDAR (SAL), which achieves resolution enhancement through phase-coherent aperture synthesis[9]. Recent breakthroughs have demonstrated remarkable SAL performance, including Wang et al.'s (2022) 3 cm (range) × 1 cm (azimuth) resolution at 4.6 km distance for wide-swath imaging[10], and Wu et al.'s (2024) prototype achieving 15.6 mm range resolution and 1.7 mm azimuth resolution at 100 km distance[11]. Yet, SAL's susceptibility to atmospheric phase distortions and its stringent phase stability requirements severely limit its practicality for long-range, near-ground imaging applications[12,13].
Reflective tomography LiDAR (RTL) circumvents these limitations by applying computed tomography (CT) principles to laser ranging. Unlike conventional imaging that records intensity distributions or SAL’s phase-sensitive interferometry, RTL employs angularly diverse projections of time-resolved reflectance signals to reconstruct target geometries through inverse Radon transform[14]. This approach fundamentally decouples spatial resolution from the diffraction limit, instead leveraging the statistical diversity of multi-angle backscatter signals associated with target geometric features.
The development of RTL imaging traces its origins to pioneering experiments by Parker et al. in 1988, who first demonstrated the feasibility of applying tomographic principles to laser imaging. Their reconstruction of a conical target at 10 m revealed a dimensional estimation error of 12.6 cm, validating the foundational concept[14]. Subsequent advancements by Lasché et al. in 1999 marked a critical milestone, resolving a 5-cm-diameter cylinder separated by 81 cm at 1 km range and reconstructing the approximate contour of a 1.64-m-wide satellite model[15]. By 2001, the same group extended this methodology to orbital satellites, albeit with limitations: their work focused on resolving two retroreflectors spaced 6 m apart on a satellite hundreds of kilometers away, neglecting the main satellite body. Incomplete angular projections led to significant reconstruction artifacts, yet achieved meter-scale resolution[16]. A notable leap occurred in 2010 when Murray et al. reported RTL imaging of a 1-m-wide composite target (featuring retroreflectors and diffuse cylinders) at 22.4 km range, claiming a resolution of 15 cm, which was tenfold superior to conventional telescope imaging[17]. Laboratory-scale progress followed in 2012, as Henriksson et al. reconstructed detailed centimeter-resolved images of a ship model at 53 m under controlled conditions, though quantitative resolution metrics were omitted[18]. Most recently in 2021, researchers at the National University of Defense Technology proposed a LiDAR system to detect the simulated debris barycenter and obtained 2 cm accuracy of the barycenter estimation with a distance of 1 km, albeit slightly below the system’s theoretical diffraction limit (1.3 cm)[19].
Despite these incremental advances, critical gaps persist in the field. Existing demonstrations predominantly fall into one of two extremes: high-detail imaging of complex targets at short ranges or coarse reconstructions of simple geometric targets at extended distances. No prior work has systematically addressed kilometer-scale, high-fidelity imaging of complex targets while rigorously exploring the resolution limits through synergistic hardware-algorithmic optimization. This underscores the necessity of advancing RTL methodologies to bridge the gap between laboratory-scale precision and long-range, complexity-laden field conditions, particularly for applications demanding both sub-diffraction resolution and robustness against real-world atmospheric effects and target complexity.
In order to break through the image resolution of the diffraction limit, we proposed a new tomography LiDAR, which has achieved a resolution of 2 cm at a range of 10.38 km. The subsequent sections of this article are structured as follows. Section 2 introduces the fundamental methodology and instrumentation of RTL, covering three projection image reconstruction algorithms and the introduction of the LiDAR system we built. Section 3 presents the experimental results, including validation tests of system performance, evaluation of reconstructed image resolution, and comparative analysis of algorithm efficacy under sparse data conditions. The conclusion is summarized afterward in Sec. 4.
2. Method and Experimental Setup
2.1. Methodology
RTL employs collimated laser pulses instead of X-rays to probe targets through multi-angle illumination. As illustrated in Fig. 1, laser pulses are directed toward the object at an angle $\theta$, with the entire illuminated surface contributing to the backscattered signal. Depth-dependent modulation of the temporal pulse profile arises from variations in surface topography: distinct regions along the laser path introduce differential time delays proportional to their relative depths.
To reconstruct images from time-resolved laser echoes, RTL primarily employs two algorithms: filtered back-projection (FBP)[20] and the algebraic reconstruction technique (ART)[21]. The FBP algorithm combines the inverse Radon transform with frequency-domain filtering for efficient contour reconstruction[22–24], but it exhibits artifacts under sparse sampling. The ART addresses angular sparsity by solving an underdetermined linear system through iterative projection updates[21,25–27], minimizing the residuals between measured and simulated data. To further suppress artifacts and enhance image quality, the nonlocal means (NLM) algorithm[28,29], a denoising method leveraging image self-similarity, is integrated into the ART sparse model.
2.1.1. FBP algorithm
For a target described by the reflectivity function $f(x,y)$ in a 2D plane, the projection at angle $\theta$ is the line integral of $f(x,y)$ along a ray defined by $x\cos\theta + y\sin\theta = s$, where the range $s$ is related to the time-of-flight (ToF) of photons by $s = c\cdot\mathrm{ToF}/2$. This constitutes the Radon transform of $f(x,y)$:
$$p(s,\theta) = \iint f(x,y)\,\delta(x\cos\theta + y\sin\theta - s)\,\mathrm{d}x\,\mathrm{d}y.$$

Let $q(s,\theta)$ denote the filtered projection obtained by convolving $p(s,\theta)$ with the ramp filter $h(s)$, whose frequency response is $|\omega|$:
$$q(s,\theta) = p(s,\theta) * h(s).$$

The reconstructed image is then the back-projection of all filtered projections:
$$f(x,y) = \int_{0}^{\pi} q(x\cos\theta + y\sin\theta,\,\theta)\,\mathrm{d}\theta.$$

In practical observations, the incident angle is discretely sampled with an angular interval $\Delta\theta$. The reconstructed image is expressed as[30]
$$\hat{f}(x,y) = \Delta\theta \sum_{i=1}^{K} q(x\cos\theta_i + y\sin\theta_i,\,\theta_i),$$
where $K$ is the number of projection angles.
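The discrete form above can be illustrated with a short Python sketch (a minimal implementation assuming a square reconstruction grid and nearest-neighbor interpolation; the function name is ours, and a library routine such as skimage.transform.iradon could be used instead):

```python
import numpy as np

def fbp_reconstruct(sinogram, angles_deg):
    """Minimal filtered back-projection for a range-resolved sinogram.

    sinogram   : 2D array of shape (n_bins, n_angles); column k holds the
                 projection p(s, theta_k) sampled along the range axis.
    angles_deg : 1D array of projection angles in degrees.
    """
    n_bins, n_angles = sinogram.shape

    # Ramp filter |omega| applied along the range axis in the frequency domain.
    ramp = np.abs(np.fft.fftfreq(n_bins))[:, None]
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=0) * ramp, axis=0))

    # Back-project every filtered projection onto an n_bins x n_bins grid.
    center = n_bins // 2
    xs = np.arange(n_bins) - center
    X, Y = np.meshgrid(xs, xs)
    image = np.zeros((n_bins, n_bins))
    for k, theta in enumerate(np.deg2rad(angles_deg)):
        s = X * np.cos(theta) + Y * np.sin(theta) + center   # s = x cos(t) + y sin(t)
        idx = np.clip(np.round(s).astype(int), 0, n_bins - 1)
        image += filtered[idx, k]                            # accumulate q(s, theta)
    return image * np.pi / n_angles                          # delta-theta weighting
```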
2.1.2. Basic principles of the ART reconstruction method
The ART operates as an iterative solver for large-scale linear systems derived from discretized tomographic projection models. In RTL applications, the target's spatial distribution is discretized into an $n \times n$ pixel grid, where each pixel maintains a constant grayscale value. This discretization converts the continuous image function into an $N$-dimensional vector $\boldsymbol{f} = (f_1, f_2, \ldots, f_N)^{\mathrm{T}}$ (with $N = n^2$) representing pixel intensities. Correspondingly, the projection data acquired through $M$ ray paths form a measurement vector $\boldsymbol{p} = (p_1, p_2, \ldots, p_M)^{\mathrm{T}}$, where each $p_i$ corresponds to the line integral along the $i$th ray path. Thus, the essence of iterative reconstruction is to solve the linear equations
$$\sum_{j=1}^{N} w_{ij} f_j = p_i, \qquad i = 1, 2, \ldots, M.$$

The imaging physics establishes a linear relationship between the pixel values and projections through a weighting matrix $\boldsymbol{W} = \{w_{ij}\}$, where the weight factor $w_{ij}$ quantifies the contribution of the $j$th pixel to the $i$th projection. Geometrically, $w_{ij}$ typically corresponds to the intersection length between the $i$th ray and the $j$th pixel. The system can also be written in matrix form as $\boldsymbol{W}\boldsymbol{f} = \boldsymbol{p}$, which is fundamentally underdetermined ($M < N$) in practical LiDAR configurations due to limited projection angles and sparse sampling.

In practical contour reconstruction tasks, direct matrix inversion proves infeasible, and the ART is generally used as a fundamental algorithm for image reconstruction. The ART employs sequential orthogonal projections onto hyperplanes defined by each equation in the system, implementing a relaxed Kaczmarz algorithm. The iterative scheme progresses as follows: starting from an initial guess $\boldsymbol{f}^{(0)}$ (typically zero-initialized), each iteration cycles through the projection equations while updating pixel values. For the $k$th iteration processing the $i$th projection, the update rule of the $j$th pixel becomes[31]
$$f_j^{(k+1)} = f_j^{(k)} + \lambda\,\frac{p_i - \sum_{n=1}^{N} w_{in} f_n^{(k)}}{\sum_{n=1}^{N} w_{in}^{2}}\,w_{ij},$$
where $\lambda$ serves as a relaxation factor. This correction redistributes the residual error across pixels proportionally to their respective weights $w_{ij}$. A complete iteration cycle processes all projections sequentially, with the solution progressively approaching the feasible region where all hyperplanes intersect. Convergence criteria typically terminate iterations when the residual norm falls below a specified threshold or after a fixed number of iterations.
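A minimal Python sketch of this relaxed Kaczmarz update is given below (the function name, sweep count, relaxation value, and optional nonnegativity constraint are illustrative choices rather than the exact settings used in this work; in practice $\boldsymbol{W}$ would be stored as a sparse matrix):

```python
import numpy as np

def art_reconstruct(W, p, n_pixels, n_sweeps=20, lam=0.25, nonneg=True):
    """Relaxed Kaczmarz (ART) solver for W f = p.

    W   : (M, N) weighting matrix; W[i, j] is the intersection length of
          ray i with pixel j.
    p   : (M,) measured projection values.
    lam : relaxation factor.
    """
    f = np.zeros(n_pixels)
    row_norms = np.einsum("ij,ij->i", W, W)        # ||w_i||^2 for every ray
    for _ in range(n_sweeps):
        for i in range(W.shape[0]):
            if row_norms[i] == 0:
                continue
            residual = p[i] - W[i] @ f             # mismatch on the i-th ray
            f += lam * residual / row_norms[i] * W[i]
            if nonneg:
                np.maximum(f, 0.0, out=f)          # optional f_j >= 0 constraint
    return f
```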
2.1.3. Sparse ART with nonlocal means model
In reflective tomography, incomplete or sparse projection data lead to ill-posed reconstruction problems. The ART sparse model addresses this by finding a sparse transformation matrix $\boldsymbol{\Psi}$ such that $\boldsymbol{\alpha} = \boldsymbol{\Psi}\boldsymbol{f}$, where $\boldsymbol{\alpha}$ is sparse. The reconstruction problem then becomes
$$\min_{\boldsymbol{f}} \|\boldsymbol{\Psi}\boldsymbol{f}\|_{1} \quad \mathrm{s.t.} \quad \boldsymbol{W}\boldsymbol{f} = \boldsymbol{p},$$
where the L1-norm turns the problem into a convex optimization problem. However, ART sparse reconstructions still suffer from noise and artifacts. To improve reconstruction quality, NLM regularization is introduced, which utilizes the relevance between pixels to remove artifacts. The NLM weight between pixels $i$ and $j$ is computed as
$$w(i,j) = \frac{1}{Z(i)}\exp\!\left(-\frac{\|\boldsymbol{v}(\mathcal{N}_i) - \boldsymbol{v}(\mathcal{N}_j)\|_{2,a}^{2}}{h^{2}}\right),$$
where $\|\boldsymbol{v}(\mathcal{N}_i) - \boldsymbol{v}(\mathcal{N}_j)\|_{2,a}^{2}$ is the Gaussian-weighted Euclidean distance between local patches centered at $i$ and $j$, $Z(i)$ is a normalizing factor, and $h$ controls the decay of the weights. The NLM-regularized reconstruction proceeds in two steps. Initial reconstruction: the ART iteratively estimates $\boldsymbol{f}$ from the sparse projections, enforcing nonnegativity ($f_j \ge 0$) and relaxation (relaxation factor $\lambda$). NLM denoising: after ART convergence, the NLM suppresses artifacts by exploiting the nonlocal self-similarity of the image; each pixel is updated as a weighted sum of its neighbors, where the weights depend on patch similarity.
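A compact sketch of this two-step procedure is shown below; it reuses the art_reconstruct sketch from Sec. 2.1.2 and substitutes scikit-image's denoise_nl_means for a hand-written NLM filter, with patch size, search window, and filtering strength chosen purely for illustration:

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

def art_nlm_reconstruct(W, p, shape, **art_kwargs):
    """ART estimate followed by a single NLM pass over the image estimate."""
    # Step 1: initial reconstruction (art_reconstruct sketch from Sec. 2.1.2).
    f = art_reconstruct(W, p, shape[0] * shape[1], **art_kwargs)
    img = f.reshape(shape)

    # Step 2: NLM denoising; weights follow patch similarity.
    sigma = max(float(np.mean(estimate_sigma(img))), 1e-6)
    return denoise_nl_means(img, patch_size=5, patch_distance=6,
                            h=0.8 * sigma, sigma=sigma, fast_mode=True)
```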
2.2. System Development
In order to realize the long-range detection for super-resolution tomography imaging, four predominant problems should be considered when developing systems: 1) a high-power, ultrashort laser pulse, which would be used for long-range detection and obtaining a theoretical range resolution; 2) a special optical system, which can ensure the fidelity of the emitted laser shape, a suitable divergence angle, and a high optical efficiency to obtain high signal-to-noise ratio (SNR) signals; 3) a receiver, which has enough bandwidth to distinguish the detailed echo waveforms; 4) a trigger, which can offer an accurate data-acquisition time-sequence to receive accurate range-temporal data.
Considering the above problems, we developed a novel system, which consists of a newly designed actively mode-locked picosecond laser; a homemade high-wavefront-quality optical system composed of an expander and a telescope of 260 mm diameter; a receiver consisting of a 7.5 GHz bandwidth InGaAs-based avalanche photodiode (APD) and a 50 giga-sample-per-second (GSPS) data acquirer; and a homemade ultra-high-speed InGaAs-based PIN cube serving as the trigger, which offers an accurate time reference (with less than 100 ps time jitter) for data acquisition instead of the Q-switched synchronizing signal from the laser. Several key parameters of this system are listed in Table 1. Based on the proposed system, a range resolution of 2.3 cm or better was demonstrated. The reconstructed image of a complex target (NUDT WordArt model) of decimeter-level dimensions was obtained with a resolution of better than 2 cm over 10 km. In theory, the image resolution of the receiving telescope with the 1064 nm laser at the diffraction limit, according to the Rayleigh criterion, is about 5 cm. To the best of our knowledge, this is the first time that a tomographic image at this near-ground distance has been realized with a resolution of 2 cm, which is considerably beyond the diffraction limit of the employed optical system.
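For reference, the quoted value of about 5 cm follows directly from the Rayleigh criterion applied to the 260 mm receiving aperture, the 1064 nm wavelength, and the 10.38 km range:
$$\theta_R = 1.22\,\frac{\lambda}{D} = 1.22 \times \frac{1.064\times10^{-6}\ \mathrm{m}}{0.26\ \mathrm{m}} \approx 5.0\ \mu\mathrm{rad}, \qquad \delta x \approx \theta_R R \approx 5.0\ \mu\mathrm{rad} \times 1.038\times10^{4}\ \mathrm{m} \approx 5.2\ \mathrm{cm}.$$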
A schematic of the proposed RTL system is shown in Fig. 2. The dashed lines delineate the four main parts of this LiDAR system: an actively mode-locked picosecond laser working at 1064 nm, a high-wavefront-quality optical system (expander and telescope), a homemade InGaAs-based PIN cube as the trigger offering an accurate data-acquisition triggering signal, and a wide-bandwidth receiver. The picosecond laser incorporates an actively mode-locked, Q-switched Nd:YAG master oscillator (MO) and a two-pass amplifier. The MO is mainly applied to generate ultrashort laser pulses and is based on Nd:YAG1, M1, M2, and M3. The acousto-optic modulator (AOM) is used as the mode-locker. An electro-optic modulator (EOM) designated EOM2 serves as the Q-switch for forming the short laser pulse. EOM1 is used together with P1 for cavity dumping to extract ultrashort laser pulses. A pinhole in the MO is used for selecting the TEM00 mode. When the Q-switch opens at the maximum pulse intensity formed by the mode-locker, EOM1 works with P1 to perform cavity dumping, yielding picosecond laser pulses with a pulse energy of approximately 0.80 mJ at a repetition rate of 15 Hz. The two-pass amplifier boosts the energy of the picosecond pulses from the MO to a higher level, which is preferable for long-distance detection. The negative lens (NL) provides a divergent beam filling the full aperture on the second pass through the Nd:YAG2 crystal. P2 provides additional filtering of light with unwanted polarization. P3 reflects the ultrashort pulses from the MO into the two-pass amplifier, and the quarter-wave plate (1/4WP), together with M4, converts the vertical input polarization into horizontal polarization. A small tunable telescope working as a positive lens reduces the beam divergence to approximately 0.7 mrad and corrects the transverse mode of the Gaussian beam.
Figure 2.Schematic of the RTL system. CM, concave mirror; EOM, electro-optic modulator; AOM, acousto-optic modulator; P, polarizer; M, mirror; NL, negative lens; Nd:YAG, neodymium-doped yttrium-aluminum-garnet laser; WP, wave plate; ND, neutral density filter; IPC, industrial personal computer; APD, avalanche photodiode; MMF, multimode fiber.
Regarding the expander and telescope optical system, after the tunable telescope, the output pulsed laser is reflected twice by M5 and M6, attenuated by a neutral density filter (ND), and then sent into the expander, which compresses the divergence angle by a factor of 3.5. The signal scattered by the target is collected by a Cassegrain telescope, which consists of a 260 mm primary mirror and a 70 mm secondary mirror with a focal length of 1 m, and is then sent to the receiver. Both the expander and the Cassegrain telescope have high wavefront accuracy to ensure a high-quality expanded laser beam and an accurate echo signal.
The receiver consists of an InGaAs-based APD with a bandwidth of 7.5 GHz for echo light detection and a data acquirer with a sampling rate of 50 GSPS. The echo collected by the Cassegrain telescope is transferred to the APD through a multimode fiber (MMF) with a core diameter of 62.5 µm. The falling edge of the trigger pulse triggers the ultra-high-speed digitizer for data acquisition. The industrial personal computer (IPC) processes the data acquired by the digitizer to compute the tomographic image.
The optimized InGaAs-based PIN cube serving as the trigger offers an accurate time reference for data acquisition instead of the Q-switched synchronizing signal. The physical rationale behind this choice is that the time jitter of the PIN cube (less than 100 ps) is about an order of magnitude smaller than that of Q-switching (about 1 ns), which guarantees centimeter-level range accuracy according to the LiDAR range error equation $\Delta R = c\,\Delta t/2$, where $c$ is the speed of light in the near-ground atmosphere and $\Delta t$ is the jitter time. Thus, a high-speed InGaAs-based PIN cube with an optimized circuit (shown in Fig. 1) is proposed as the trigger for data acquisition.
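Substituting the quoted jitter values into this relation makes the order-of-magnitude advantage explicit:
$$\Delta t \le 100\ \mathrm{ps}\ \Rightarrow\ \Delta R \le \frac{3\times10^{8}\ \mathrm{m/s} \times 100\times10^{-12}\ \mathrm{s}}{2} = 1.5\ \mathrm{cm}, \qquad \Delta t \approx 1\ \mathrm{ns}\ \Rightarrow\ \Delta R \approx 15\ \mathrm{cm}.$$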
2.3. Experimental Setup
The field validation of the system was conducted on November 15, 2022, establishing a 10.38 km baseline between the transmitter-receiver station at Huanan City and the target deployment site at Zipeng Hill. As illustrated in Fig. 3, the target is a NUDT WordArt model comprising a geometrically complex thin-film structure, mounted on an elevated tower with controlled rotational freedom. To enable full-projection sampling, the target assembly was rotated about its central axis through 180° in 1° angular increments, while maintaining a fixed 30° inclination relative to the rotation axis. The transceiver system, housed in an enclosure, executed angularly resolved measurements through sequential laser pulse transmission and broadband acquisition of the detailed echo waveforms. The data at each 1° step were averaged over 15 laser pulses to mitigate the influence of atmospheric turbulence and system noise.
Figure 3.Layout of the experimental setup. (a) Image of the test tower with a 100-mm-aperture telescope. (b) The target, fixed at an angle of 30° to the light direction, rotating on a rotary table. (c) The RTL system.
3. Experimental Results

3.1. System Performance Validation

Before using the system to image the target model, the system's performance was empirically validated through two field experiments: a range resolution test and an analysis of waveform broadening caused by target inclination. The experimental apparatus is consistent with that described in Sec. 2.3, with only the target model replaced by adjustable retroreflective plates to obtain more precise distance and angle information.
First, to quantitatively evaluate the range resolution limit of the RTL system, a controlled dual-target discrimination experiment was implemented. As illustrated in Fig. 4(a), two reflective plates are aligned along the laser propagation direction, with their surface normals parallel to the beam path, and the separation between the plates is adjustable. Figure 4(c) shows the echo waveforms of the two plates at spacing distances of 2.3, 2.5, 2.7, and 3.5 cm. The system is capable of distinguishing the two peaks of the echo waveform even at a spacing of 2.3 cm. The measured longitudinal resolution establishes the fundamental sampling capability of our system along the line-of-sight dimension. In tomographic reconstruction, this axial resolution directly determines the minimum distinguishable thickness of reconstructed voxels. By achieving sub-diffraction-limit sampling in the range dimension, we enable the system to resolve finer structural details than would be possible through direct optical imaging at this wavelength.
Figure 4.(a) Image of two plates fixed on a translation stage. (b) Image of a plate fixed on a precise rotating platform. (c) Echo waveforms of two plates with different spacing distances, i.e., 23, 25, 27, and 35 mm, over 10 km. (d) Echo waveforms of one plate with different rotation angles, i.e., 0°, 10°, 20°, and 30°, over 10 km.
Second, a comparison of the echo waveforms of a single plate at different angles of incidence was conducted. As shown in Fig. 4(b), the angle of the reflector relative to the laser transmission direction can be adjusted. Figure 4(d) shows the echo waveforms of the plate at incident angles of 0°, 10°, 20°, and 30°; the width at 0° is about 0.2 ns. The waveforms broadened as the angle increased, and the expanded width at 30° is about 0.94 ns with three peaks. This waveform distortion profile directly corresponds to the projected axial geometric variations. Unlike conventional LiDAR ranging, where waveform broadening due to surface tilt is typically corrected to improve distance measurement accuracy, RTL intentionally exploits this effect. For tilted surfaces, different lateral positions of the laser spot illuminate varying surface heights, resulting in distinct ToF returns that broaden the echo waveform. This broadening directly encodes the target's geometric profile along the line of sight, effectively converting spatial variations into resolvable temporal features. The observed waveform distortion in Fig. 4(d), where increasing tilt angles produce progressively broader echoes with multiple peaks, demonstrates the system's ability to resolve fine axial geometric variations. This capability is essential for RTL because it enables the reconstruction of complex 3D structures from time-resolved echoes, even at long ranges where direct imaging would be diffraction-limited.
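As a simple geometric picture (an approximation that neglects the intrinsic pulse width and the receiver response), a flat plate with illuminated width $w$ whose normal is tilted by $\theta$ from the beam direction spans a projected depth of $w\sin\theta$, so the expected two-way broadening is
$$\Delta t \approx \frac{2\,w\sin\theta}{c}.$$
Treating the broadening in excess of the 0° width as purely geometric, the extra $0.94 - 0.2 \approx 0.74\ \mathrm{ns}$ observed at 30° corresponds to a projected depth extent of roughly $c\,\Delta t/2 \approx 11\ \mathrm{cm}$ along the line of sight.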
3.2. Image Reconstruction and Assessment
The angularly resolved time-intensity profiles of the averaged, scaled raw data acquired across 180 projection angles are displayed in Fig. 5(a), exhibiting time-resolved intensity variations correlated with the target geometry. Applying the FBP algorithm described in Sec. 2.1 to the projection data, we obtain the reconstructed tomogram shown in Fig. 5(b), which accurately recovers the target's macroscopic features. Measured dimensions of the reconstructed image indicate a lateral width of approximately 35 cm and an axial length of approximately 45 cm, where the elongation along the rotational axis arises from the 30° mounting angle relative to the axis of rotation. These dimensions align with the physical target proportions shown in Fig. 5(c), confirming that the geometry is preserved through the image reconstruction. Critical structural details, including corner vertices and edges, are visibly resolved in the reconstruction despite the kilometer-scale propagation. The demonstrated length-width proportionality and contour consistency quantitatively validate the system's capacity to reconstruct remote targets through reflective tomography, achieving centimeter-scale fidelity under real-world atmospheric turbulence conditions.
Figure 5.(a) Projection data of WordArt target. Projections were taken at 1° intervals from 0° (one side forward) to 180° (another side forward). (b) Image reconstructed by the FBP algorithm with measured data. (c) Typical feature dimensions of the target.
This study employs the edge-based modulation transfer function (MTF) analysis method[32] to quantify the system's spatial resolution by characterizing its response to different spatial frequencies. The theoretical foundation of this approach is derived from linear system theory, expressed as
$$\mathrm{MTF}(f) = \left|\mathcal{F}\{\mathrm{LSF}(x)\}\right|,$$
where $\mathrm{LSF}(x)$ denotes the line spread function and $\mathcal{F}$ represents the Fourier transform.
The implementation begins with selecting regions of interest (ROIs) containing sharp edge features from the reconstructed image. To precisely locate the edge positions, the centroid method is applied row-wise:
$$x_c(i) = \frac{\sum_{j} j\,g(i,j)}{\sum_{j} g(i,j)}.$$
Here, $g(i,j)$ represents the grayscale value of the pixel at the $i$th row and the $j$th column. A least-squares linear fit is then performed to determine the edge orientation, compensating for potential tilt during acquisition.
Using 4-fold oversampling, the edge spread function (ESF) is constructed, describing the system's response to an ideal step input. The LSF is obtained by differentiating the ESF:
$$\mathrm{LSF}(x) = \frac{\mathrm{d}\,\mathrm{ESF}(x)}{\mathrm{d}x}.$$
To mitigate spectral leakage, a Hamming window is applied to the LSF:
$$w(n) = 0.54 - 0.46\cos\!\left(\frac{2\pi n}{N-1}\right), \qquad 0 \le n \le N-1,$$
where $N$ is the length of the LSF. The MTF curve is derived by normalizing the magnitude of the Fourier transform of the windowed LSF. The limiting resolution, defined as the spatial frequency at which the MTF drops to 0.1, quantifies the system's ability to resolve fine details[33].
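The full edge-to-MTF pipeline can be condensed into a short Python sketch (the function name, the distance-binning scheme, and the gradient-centroid edge localization are our simplifications of the standard slanted-edge procedure rather than the exact implementation used here):

```python
import numpy as np

def edge_mtf(roi, oversample=4):
    """Slanted-edge MTF sketch: locate edge -> ESF -> LSF -> Hamming -> |FFT|.

    roi : 2D array containing a single near-vertical edge.
    Returns (spatial frequency in cycles/pixel, normalized MTF).
    """
    rows, cols = roi.shape
    x = np.arange(cols)

    # 1) Edge position per row: centroid of the row-wise intensity gradient,
    #    followed by a least-squares line fit to compensate for edge tilt.
    grad = np.abs(np.diff(roi, axis=1))
    centers = (grad * x[:-1]).sum(axis=1) / (grad.sum(axis=1) + 1e-12)
    slope, intercept = np.polyfit(np.arange(rows), centers, 1)

    # 2) Oversampled ESF: bin every pixel by its signed distance to the edge.
    dist = x[None, :] - (slope * np.arange(rows)[:, None] + intercept)
    bins = np.round(dist * oversample).astype(int)
    bins -= bins.min()
    counts = np.bincount(bins.ravel())
    esf = np.bincount(bins.ravel(), weights=roi.ravel()) / np.maximum(counts, 1)

    # 3) LSF: derivative of the ESF, tapered by a Hamming window.
    lsf = np.diff(esf) * np.hamming(esf.size - 1)

    # 4) MTF: normalized magnitude of the Fourier transform of the LSF.
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, d=1.0 / oversample)
    return freqs, mtf / mtf[0]
```

The limiting resolution is read off where the curve drops to 0.1, and a conversion from cycles/pixel to centimeters then requires only the physical pixel pitch of the reconstruction grid.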
MTF curves from multiple ROIs are compared in the frequency domain, as shown in Fig. 6. The resolution values (originally in cycles/pixel) were converted to physical dimensions, yielding 1.68, 1.16, 1.44, 1.52, 0.69, 1.52, 1.39, 1.48, and 0.67 cm for the selected ROIs, with a final average resolution of 1.28 cm.
Figure 6.(a) Image reconstructed by the FBP algorithm with ROIs. (b) MTF curves of ROIs.
As discussed above, the image resolution of the receiving telescope with the 1064 nm laser at the diffraction limit is about 5 cm in theory. The reconstructed image resolution obtained with the tomography LiDAR is better than 2 cm, a 2.5-fold improvement over the optical diffraction limit. The result shows that the proposed LiDAR system is capable of centimeter-scale imaging over 10 km and demonstrates super-resolution imaging capability.
3.3. Reconstruction of Sparsely Sampled Data
In long-range target detection, sparse projection data may arise due to equipment limitations or non-cooperative target motion. In this study, we simulate such conditions by subsampling a complete set of angular projections (Fig. 7) while retaining knowledge of the sampling angles; this assumption simplifies reconstruction but may not hold under fully unconstrained conditions. The reconstruction results obtained using the different algorithms are shown in Fig. 8.
Figure 7.(a) Equally sampled projection data with an interval of 5°. (b) Randomly sampled projection data with the same sampling rate as the 5° uniform sampling. (c) Equally sampled projection data with an interval of 10°. (d) Randomly sampled projection data with the same sampling rate as the 10° uniform sampling.
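A minimal sketch of how such sampling patterns can be drawn from the full 1° scan is given below (the function and variable names are illustrative):

```python
import numpy as np

def subsample_projections(sinogram, angles_deg, step=None, n_random=None, seed=0):
    """Keep every `step`-th projection (uniform sparsity) or draw `n_random`
    projections at random, returning the reduced sinogram and its angles."""
    if step is not None:
        idx = np.arange(0, len(angles_deg), step)
    else:
        rng = np.random.default_rng(seed)
        idx = np.sort(rng.choice(len(angles_deg), size=n_random, replace=False))
    return sinogram[:, idx], angles_deg[idx]

# Example: 5-degree uniform sparsity vs. random sparsity at the same rate.
# sino_u, ang_u = subsample_projections(sinogram, angles, step=5)
# sino_r, ang_r = subsample_projections(sinogram, angles, n_random=len(ang_u))
```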
To quantitatively assess the performance of different reconstruction algorithms, four metrics including the information entropy (IE)[31,34], correlation coefficient (CC)[29], structural similarity (SSIM)[35], and peak signal-to-noise ratio (PSNR)[36] are utilized. Figure 9 shows the evaluation of the reconstruction performance of full and sampled data using different algorithms.
Figure 9.Evaluation of reconstruction performance using the metric (a) IE, (b) CC, (c) PSNR, and (d) SSIM. The reference image of the evaluation is a detailed and sharp image of the model.
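These metrics can be evaluated, for example, with NumPy and scikit-image; the histogram-based entropy estimate and the assumption that both images are normalized to [0, 1] are our own choices for this sketch:

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def image_entropy(img, bins=256):
    """Shannon entropy (in bits) of the grayscale histogram."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def evaluate(recon, reference):
    """IE, CC, PSNR, and SSIM of a reconstruction against a reference image
    of identical shape, both normalized to [0, 1]."""
    return {
        "IE":   image_entropy(recon),
        "CC":   float(np.corrcoef(recon.ravel(), reference.ravel())[0, 1]),
        "PSNR": peak_signal_noise_ratio(reference, recon, data_range=1.0),
        "SSIM": structural_similarity(reference, recon, data_range=1.0),
    }
```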
IE quantifies the uncertainty or information content in reconstructed images. For the IE metric, the NLM consistently demonstrates the highest values across all sampling scenarios, indicating that it retains the most information and detail in the reconstructed images. In contrast, the ART shows the lowest IE values, suggesting that it may over-smooth the images and lose important details. When comparing complete and sparse data, all algorithms exhibit a decrease in IE with sparse sampling, reflecting the loss of information due to reduced data volume. It should be noted that, despite achieving consistently high IE scores across different sparse sampling schemes, FBP reconstructions display prominent artifacts in sparsely sampled cases. This suggests that part of the perceived information may stem from artifact patterns rather than structural recovery.
CC measures linear similarity to the reference image. NLM outperforms the other algorithms with the highest CC values, showing a strong linear relationship and thus a closer match to the original data. FBP follows, while ART lags significantly, particularly in sparse sampling cases. The CC values of all algorithms decline as the data becomes sparser, highlighting the challenge of accurately reconstructing images with limited data.
PSNR metrics emphasize pixel-wise fidelity and noise suppression. Here, the NLM again leads with the highest PSNR values, suggesting that it generates reconstructed images with minimal noise and higher fidelity. FBP also performs relatively well, but the ART has lower PSNR values, implying more noise and lower quality in its reconstructed images. Under sparse sampling, the PSNR values of all algorithms decrease, but the NLM maintains a relatively higher level, demonstrating its robustness in handling sparse data.
The SSIM evaluates the structural similarity between the original and reconstructed images. FBP experiences a significant decline in SSIM values, highlighting FBP’s vulnerability to data sparsity, as it struggles to maintain the structural integrity of the original images. In contrast, NLM demonstrates remarkable stability. Its SSIM values decrease only marginally. This minimal decline underscores NLM’s ability to preserve the structural information effectively even with sparse data, outperforming FBP significantly in this regard. Meanwhile, the ART performs poorly, especially in sparse sampling scenarios, with its SSIM values remaining at extremely low levels across different sampling conditions.
In summary, the NLM emerges as the most effective algorithm, demonstrating superior performance across all metrics and data conditions. It excels in retaining information, maintaining linear relationships, minimizing noise, and preserving structural integrity, both in complete and sparse data scenarios. FBP offers a decent alternative, particularly in handling complete data, while the ART struggles significantly under sparse sampling.
It should be noted that our current reconstruction framework assumes known angular sampling positions, which may not be available for completely non-cooperative targets. In practical applications, joint estimation of the target pose and reflectivity distribution would be necessary. Future work should explore adaptive reconstruction methods that relax angular prior requirements while maintaining HR.
4. Conclusion and Future Work
In conclusion, this study demonstrates HR tomographic imaging at a standoff distance of 10 km under real-world atmospheric conditions, achieving 2 cm resolution in field experiments. To the best of our knowledge, these results represent the first experimental validation of centimeter-scale measurements over such extended near-ground distances. The success was attained despite significant challenges from atmospheric attenuation, turbulence-induced waveform distortions, and the intricate geometry of targets. By combining an optimized RTL system architecture with the advanced reconstruction algorithm, we have demonstrated the capability of the proposed system and effectively bridged the gap between laboratory-scale precision and long-range field-deployable LiDAR performance. Through analysis of the sparse data reconstruction results, we confirm that the NLM reconstruction algorithm has advantages in preserving shape integrity and suppressing artifacts.
Future efforts will focus on three key directions to advance both resolution and operational robustness. First, enhancing the receiver bandwidth is expected to further improve resolution by resolving finer temporal features in echo signals. Second, extending the detection range to 100 km will require maximizing laser pulse energy, combined with advanced photon-efficient signal processing. Integrating single-photon detection technology into the RTL framework offers a promising pathway toward achieving centimeter-scale resolution at megameter-scale distances. Therefore, in planned future research, we hope to obtain the echo waveform by single-photon technology and acquire a super-resolution tomographic image based on RTL from over 100 km with centimeter-scale resolution. Beyond hardware improvements and a new detection method, future efforts will also investigate algorithmic enhancements to relax the requirement for known angular sampling. This includes developing joint reconstruction frameworks that simultaneously estimate target pose and reflectivity distribution. Such advancements would extend the applicability of RTL to fully non-cooperative targets in unconstrained rotational states, which is a crucial capability for real-world orbital debris monitoring and adversarial target recognition.
Acknowledgments
Acknowledgment. This work was supported by the National Natural Science Foundation of China (No. 61871389), the Scientific Research Project of National University of Defense Technology (Nos. 24-ZZCX-JDZ-43 and 22-ZZCX-07), and the Youth Independent Innovation Science Foundation Project of National University of Defense Technology (No. ZK23-45).