The quest for larger aperture telescopes with high angular resolution is driven by numerous scientific objectives, from astrophysics to remote sensing. Optical synthetic aperture (OSA) provides a feasible way to form a large-aperture telescope by coherently combining the light from several small apertures. Here, we demonstrate a scattering-assisted coherent diffraction imaging (CDI) approach to realize OSA. In our approach, we collect the diffraction pattern of the target object by incorporating a relatively large scattering layer in front of the aperture lens. Light scattering, traditionally considered a hindrance, is thus exploited to efficiently capture the object’s high-frequency spatial information. Experimentally, we achieve single-shot full-field imaging from the Fraunhofer regime to the Fresnel regime with a spatial resolution of 1.74 line pairs/mm over 40 m. This is equivalent to synthesizing a 5.53 cm aperture telescope using only a 0.86 cm aperture lens, a resolution enhancement of about 6.4 times over the diffraction limit of the receiving aperture. Our approach offers a new pathway for OSA and scattering-assisted optical telescopes.
Far-field high-resolution observations are essential across numerous scientific and engineering applications, such as astronomical imaging[1], remote sensing[2], and environmental monitoring. However, the pursuit of extraordinary spatial resolution through conventional large telescopes faces significant challenges: prohibitive costs of large primary mirrors, manufacturing constraints, and deployment difficulties[3]. Synthetic aperture imaging has emerged as a revolutionary alternative in both radio and optical regimes, enabling multiple independent apertures to function collectively as a virtual large aperture to achieve superior spatial resolution.
In the optical regime, optical synthetic aperture (OSA) overcomes the diffraction limit of finite-size apertures through interferometric or non-interferometric measurements, with the key distinction being whether phase synchronization is required[4–6]. Beyond the Fizeau and Michelson-type submirror approaches, significant advances in passive stellar interferometry include the international collaboration at the Very Large Telescope Interferometer (VLTI)[7] and the Center for High Angular Resolution Astronomy (CHARA) array (spanning 300 m with six 1-m telescopes)[8]. The forthcoming air Cherenkov telescope arrays, consisting of almost 100 telescopes, represent the largest optical intensity interferometry project to date[9,10].
Active OSA integrates the advantages of active laser illumination and OSA, enabling the decoupling of imaging resolution from the signal-to-noise ratio (SNR)[11]. Examples include OSA and inverse OSA Lidar[12,13], Fourier transform telescopes[14], active optical intensity interferometry[15,16], ghost OSA Lidar systems[17], and so forth[18]. While these methods achieve impressive resolution, most of them sacrifice the field of view (FoV) for resolution. They typically require relative motion between the imaging system and the target, the deployment of specially designed transmitter-receiver arrays, and multi-frame illumination. Fourier ptychographic OSA imaging shows promise for simultaneously achieving large FoV and high resolution by coherently stitching multiple low-resolution images in the Fourier domain[19,20], showing effectiveness in microscopy. However, for far-field imaging[21–24], mechanical movement still limits its application, especially for dynamic scenes.
An alternative approach to achieving high-resolution imaging emerged through coherent diffraction imaging (CDI)[25]. CDI eliminates the imaging lens by computationally recovering phase information from diffraction patterns, enabling unprecedented resolution in X-ray[26,27], electron[28], and optical regimes[29]. The easy implementation and single-shot capture capability offer significant advantages for observing dynamic processes without complex mechanical scanning systems. Despite these advancements[30], CDI techniques have primarily focused on microscopic structures at nanometer to micrometer scales, with fewer applications to macroscopic targets in remote sensing or even astronomical observation.
To overcome these limitations, recent advances have demonstrated that scattering media, traditionally considered an obstacle, can be harnessed as a computational lens[31–35] and can even enable super-resolution imaging[36–39]. Inspired by these principles, we propose a scattering-assisted CDI approach for realizing OSA. By leveraging uncharacterized scattering media as effective varifocal mirrors to capture the object’s high-frequency spatial information, our method overcomes the conventional aperture’s diffraction limit. Unlike previous approaches, we employ the fractional Fourier transform (FrFT)[40–42] to provide a unified computational framework that seamlessly bridges the near-field (Fresnel) and far-field (Fraunhofer) diffraction regimes. Experimentally, we demonstrate single-shot, full-field imaging that achieves a 1.74 line pairs/mm resolution over a 40 m distance. This is equivalent to synthesizing a 5.53 cm aperture from a 0.86 cm lens, a resolution enhancement of about 6.4 times over the diffraction limit of the receiving aperture. This approach offers a new route toward high-resolution remote imaging and may inspire computational scattering-assisted telescope designs.
2. Principle
Figure 1(a) illustrates the schematic of the experiment. The analysis begins by considering a divergent spherical wave originating at the source plane ($z=0$) and propagating to the object plane at $z=z_1$; under the paraxial propagation condition[43], it can be written as
$$U_1(\boldsymbol{\rho}) = \frac{A}{z_1}\exp\left[ik\left(z_1 + \frac{|\boldsymbol{\rho}|^2}{2z_1}\right)\right],\tag{1}$$
where $A$ is the amplitude of the spherical wave, $k = 2\pi/\lambda$ is the wavenumber, and $\boldsymbol{\rho} = (x, y)$ denotes the coordinates in the object plane. After interacting with the reflective amplitude object $O(\boldsymbol{\rho})$, the resulting diffracted light propagates to the diffuser screen at $z = z_1 + z_2$. The wave field at the diffuser plane can be described by the free-space Fresnel diffraction integral[44] as
$$U_2(\mathbf{r}) = \frac{e^{ikz_2}}{i\lambda z_2}\iint O(\boldsymbol{\rho})\,U_1(\boldsymbol{\rho})\exp\left(\frac{ik}{2z_2}|\mathbf{r}-\boldsymbol{\rho}|^2\right)\mathrm{d}^2\boldsymbol{\rho}.\tag{2}$$
Figure 1. Imaging setup and flowchart of the scattering-assisted OSA imaging. (a) The experimental schematic. A divergent laser beam propagates a distance $z_1$ to the object plane, interacts with the object, and travels a distance $z_2$ to the diffuser screen. The resulting scattered field is then captured by a camera. (b) Experimental corridor (40 m) with the diffraction area (orange dashed box). (c) Unified computational phase retrieval algorithm using FrFTs spanning the Fraunhofer to Fresnel regimes. (d) Comparison of detection methods. Direct detection: in a conventional setup, the object’s Fourier spectrum is captured at a focal plane (typically 1 m in front of the lens), with its range being limited by the lens aperture. Scattering-assisted detection: in contrast, by positioning a scattering medium as a virtual lens at this focal plane, the entire diffraction field is encoded and captured in a single shot.
By substituting Eq. (1) into Eq. (2) and assuming the source is centered at (0, 0), the resulting wave field can be reorganized as
$$U_2(\mathbf{r}) = \frac{A\,e^{ik(z_1+z_2)}}{i\lambda z_1 z_2}\exp\left(\frac{ik|\mathbf{r}|^2}{2z_2}\right)\iint O(\boldsymbol{\rho})\exp\left[\frac{ik}{2}\left(\frac{1}{z_1}+\frac{1}{z_2}\right)|\boldsymbol{\rho}|^2\right]\exp\left(-\frac{ik}{z_2}\mathbf{r}\cdot\boldsymbol{\rho}\right)\mathrm{d}^2\boldsymbol{\rho}.\tag{3}$$
We show below how this Fresnel diffraction integral can be expressed in terms of the FrFT[41,42,45,46,48]. Recall the two-dimensional FrFT definition,
$$\mathcal{F}^{p}[O](u,v) = \iint O(x,y)\,K_p(x,u)\,K_p(y,v)\,\mathrm{d}x\,\mathrm{d}y,\tag{4}$$
where $K_p$ is the transform kernel, given as
$$K_p(x,u) = \sqrt{\frac{1-i\cot\phi}{2\pi}}\exp\left[i\left(\frac{x^2+u^2}{2}\cot\phi-\frac{xu}{\sin\phi}\right)\right],\quad \phi = \frac{p\pi}{2}.\tag{5}$$
The most remarkable property of Eq. (3) is that, by changing the variables in the integral to $x = s x^{\prime}$, $y = s y^{\prime}$, $r_x = s^{\prime} u^{\prime}$, $r_y = s^{\prime} v^{\prime}$, with the fractional angle $\phi$ defined through $\cot\phi = \frac{s^{2}}{\lambda}\left(\frac{1}{z_1}+\frac{1}{z_2}\right)$, and applying the trigonometric identity $1+\cot^{2}\phi = 1/\sin^{2}\phi$, the equation can be represented as
$$U_2(u^{\prime},v^{\prime}) = C(u^{\prime},v^{\prime})\iint O(sx^{\prime},sy^{\prime})\exp\left\{i\left[\frac{x^{\prime 2}+y^{\prime 2}+u^{\prime 2}+v^{\prime 2}}{2}\cot\phi-\frac{x^{\prime}u^{\prime}+y^{\prime}v^{\prime}}{\sin\phi}\right]\right\}\mathrm{d}x^{\prime}\,\mathrm{d}y^{\prime},\tag{6}$$
where $C(u^{\prime},v^{\prime})$ collects the residual quadratic phase and constant factors. For simplicity, all the primes are omitted hereafter, since they do not affect the derivation. Equation (6) can be expressed as a straightforward two-dimensional FrFT,
$$U_2(\mathbf{r}) = C(\mathbf{r})\,\mathcal{F}^{p}[O]\left(\frac{\mathbf{r}}{s^{\prime}}\right).\tag{7}$$
Considering the intensity-only measurement and the normalized amplitude of the wave field ($|C(\mathbf{r})| = 1$), the complex phase factor of Eq. (7) disappears, and we have
$$I(\mathbf{r}) = \left|\mathcal{F}^{p}[O]\left(\frac{\mathbf{r}}{s^{\prime}}\right)\right|^{2}.\tag{8}$$
The scattered intensity pattern is captured by a detector; this capture can be modeled as a discrete sampling of the intensity distribution. The physical process of wave propagation described by the Fresnel integral thus allows us to use the FrFT as a tool for analyzing a scaled evolution of the wave field distribution. The fractional order is directly related to the physical propagation parameters through $p = \frac{2}{\pi}\arctan\left(\frac{\lambda z_{\mathrm{eff}}}{s^{2}}\right)$, where $z_{\mathrm{eff}} = z_1 z_2/(z_1+z_2)$ represents the effective diffraction distance derived from the Fresnel scaling theorem[47] and $s$ is the scale factor, typically set to 1 to maintain consistency with the real input field.
For discrete implementation using the eigendecomposition-type discrete FrFT[45], the relationship is modified to
$$p = \frac{2}{\pi}\arctan\left(\frac{\lambda z_{\mathrm{eff}} N}{L^{2}}\right),\tag{9}$$
where $N$ is the object plane sampling number and $L$ is the source size. Figure 2(a) illustrates this relationship for our experimental setup. This unified framework elegantly classifies diffraction regimes. The system operates in the Fraunhofer diffraction regime when the dimensionless Fresnel number, $N_F$, is much smaller than 1. This condition corresponds to a fractional order $p$ approaching 1, as indicated below the blue dashed line in Fig. 2(a). Otherwise, diffraction is in the Fresnel regime.
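The regime classification above can be sketched numerically. The snippet below is a minimal illustration rather than the paper's code: the Fresnel number uses the common definition $N_F = a^2/(\lambda z)$ with $a$ the target half-size, and both the 532 nm wavelength and the 0.1 cutoff standing in for "much smaller than 1" are assumptions for illustration only.

```python
def fresnel_number(half_size_m: float, wavelength_m: float, distance_m: float) -> float:
    """Dimensionless Fresnel number N_F = a^2 / (lambda * z)."""
    return half_size_m ** 2 / (wavelength_m * distance_m)

def regime(n_f: float, threshold: float = 0.1) -> str:
    """Classify as Fraunhofer when N_F << 1 (here, below `threshold`), else Fresnel."""
    return "Fraunhofer" if n_f < threshold else "Fresnel"
```

For example, a millimeter-scale target at 40 m falls in the Fraunhofer regime, while an inch-scale target at the same distance is firmly in the Fresnel regime, consistent with the experiments below.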
Figure 2. Fractional Fourier transform modeling of diffraction from near to far field. (a) Mapping of the fractional order, target size, and propagation distance. Blue dashed lines delineate the Fraunhofer (below) and Fresnel (above) regimes. Stars indicate experimental measurements from Figs. 3 and 4 (four-point stars) and Fig. 5 (five-point star). (b), (c) Profiles along the gray solid/dashed lines in (a) for various fractional orders. (d1)–(d5) Simulated amplitude diffraction patterns for the orders in (b). (e1)–(e5) Corresponding reconstruction results of (d1)–(d5). (f), (g) Diffraction patterns and reconstructions corresponding to (c). Note that the source plane scaling factor is fixed in all simulations. All diffraction amplitude patterns are normalized and shown on a lg scale.
Consequently, this unified framework [Fig. 1(c)] provides a continuous parameterization via the fractional order $p$, enabling our algorithm to operate seamlessly across the entire diffraction continuum. As a result, a single phase retrieval strategy, such as the alternating projection method detailed in Sec. 3.2, can effectively reconstruct targets regardless of their diffraction regime.
3. Results
3.1. Experimental Setup
The experiment was conducted in a 40 m open corridor [Fig. 1(b)] using the setup sketched in Fig. 1(a). A 10 mW fiber laser was collimated (focal length = 4 mm) to produce a 6.5 cm diameter light spot on the test object, which was placed 40 m away. The wavefront reflected from the object was projected onto the scattering screen (A3-size Xuan paper). An industrial lens (F/1.4) with an 8.57 mm effective aperture imaged the scattering screen onto a monochrome camera sensor (Basler acA5472-5gm, 12-bit dynamic range, 2.4 µm pixel size) to record the scattered diffraction pattern. The targets were 3D-printed hollowed-out binary patterns with mirrors mounted on the back for reflection. The target sizes ranged from 2.8 mm to 5 cm. To fully utilize the camera's dynamic range, the integration time for a single-shot capture ranged from 10 to 200 ms depending on the reflectance of the object.
3.2. Unified Reconstruction Algorithm
While the scattered intensities can be measured, the phase information contained in the scattered wave field is lost. As indicated in Eq. (8), the measured intensity is the squared magnitude of the FrFT of the object $O$. Recovering the object therefore requires retrieving the lost phase, which can be formulated as a phase retrieval problem[49–51]: find $O$ such that
$$\left|\mathcal{F}^{p}[O](\mathbf{u})\right| = \sqrt{I(\mathbf{u})},$$
where $\mathbf{u}$ denotes the two-dimensional coordinates.
Prior to phase retrieval, the raw diffraction patterns were preprocessed to mitigate noise. We selected an FoV of $1600\times1600$ pixels for data acquisition. This effective FoV strikes a balance between capturing high spatial frequency components and maintaining adequate SNR within the camera's dynamic range. Our preprocessing pipeline consisted of three steps: 1) downsampling the raw patterns to $400\times400$ pixels and computing their square root to obtain the fractional Fourier amplitude; 2) applying non-local means (NLM) denoising[52] to suppress noise while preserving edge features; and 3) applying a two-dimensional Hanning window to minimize spectral leakage. The output of this pipeline is the refined fractional Fourier amplitude, $Y$. This amplitude serves as the input for the unified phase retrieval algorithm.
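A minimal numpy sketch of this three-step pipeline is given below. To keep the snippet self-contained, the NLM step is stood in by a simple 3×3 mean filter (in practice, a proper NLM implementation such as `skimage.restoration.denoise_nl_means` would be used); the function name and defaults are ours, not the paper's code.

```python
import numpy as np

def preprocess(raw: np.ndarray, out_size: int = 400) -> np.ndarray:
    """Sketch of the three preprocessing steps: downsample + sqrt, denoise, window."""
    # 1) Block-average downsampling to out_size x out_size, then square root
    f = raw.shape[0] // out_size
    small = raw[:f * out_size, :f * out_size].reshape(out_size, f, out_size, f).mean(axis=(1, 3))
    amp = np.sqrt(np.clip(small, 0, None))
    # 2) Denoising placeholder: 3x3 mean filter (stand-in for NLM denoising)
    pad = np.pad(amp, 1, mode="edge")
    den = sum(pad[i:i + out_size, j:j + out_size] for i in range(3) for j in range(3)) / 9.0
    # 3) 2-D Hanning window to minimize spectral leakage
    w = np.hanning(out_size)
    return den * np.outer(w, w)
```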
To solve this phase retrieval problem, we employ an alternating projection scheme, which relies on projection operators that enforce constraints in two domains: the object domain (via a support set $S$) and the Fourier domain (via the measured modulus $Y$)[53]. The object domain constraint is enforced by the support projection operator $P_S$:
$$P_S[x](\mathbf{r}) = \begin{cases} x(\mathbf{r}), & \mathbf{r}\in S,\\ 0, & \text{otherwise}.\end{cases}$$
The modulus projector $P_M$ sets the modulus to the retrieved fractional Fourier magnitude $Y$, with $\mathbf{u}$ being the fractional Fourier domain coordinate, leaving the phase unchanged:
$$P_M[Z](\mathbf{u}) = Y(\mathbf{u})\,\frac{Z(\mathbf{u})}{|Z(\mathbf{u})|}.$$
To accelerate convergence, we apply three constraints in the object domain. The first is a realness and non-negativity constraint. The second is a sparsity constraint, enforced by an operator $P_{\mathrm{sp}}$, which retains only the pixels corresponding to the top 15% of the object magnitude and sets the remainder to zero. The choice of this threshold can influence the convergence and quality of the final reconstruction, with the optimal value depending somewhat on the object structure and SNR; we found that a threshold between 10% and 20% of the maximum reconstructed amplitude provided consistently robust results across all experiments. The third constraint is an adaptive support based on the shrink-wrap method[54], which iteratively refines the support region $S$. Specifically, the shrink-wrap method periodically convolves $|x|$ with a Gaussian kernel of standard deviation $\sigma$ and then thresholds at a value $\tau$; i.e., the updated support for the two-dimensional case is defined as
$$S = \left\{\mathbf{r} : \left(G_{\sigma} * |x|\right)(\mathbf{r}) > \tau\,\max\left[G_{\sigma} * |x|\right]\right\}.$$
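The object-domain operators and the shrink-wrap update can be written compactly. The sketch below is illustrative, not the paper's implementation: the function names and the numpy-only separable Gaussian blur are our choices.

```python
import numpy as np

def _gaussian_blur(img, sigma):
    """Separable Gaussian blur using numpy only (stand-in for a library call)."""
    r = int(3 * sigma) + 1
    t = np.arange(-r, r + 1)
    g = np.exp(-t ** 2 / (2.0 * sigma ** 2))
    g /= g.sum()
    out = np.apply_along_axis(lambda m: np.convolve(m, g, mode="same"), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, g, mode="same"), 1, out)

def project_support(x, support):
    """Support projection P_S: zero everything outside the support set S."""
    return np.where(support, x, 0.0)

def enforce_real_nonneg(x):
    """Realness and non-negativity constraint."""
    return np.clip(np.real(x), 0.0, None)

def sparsity_top_fraction(x, keep=0.15):
    """Sparsity operator: keep only the top `keep` fraction of magnitudes."""
    thresh = np.quantile(np.abs(x), 1.0 - keep)
    return np.where(np.abs(x) >= thresh, x, 0.0)

def shrinkwrap_support(x, sigma, tau):
    """Shrink-wrap: blur |x| with a Gaussian of std sigma, then threshold at
    tau times the blurred maximum to obtain the refined support mask."""
    blurred = _gaussian_blur(np.abs(x), sigma)
    return blurred > tau * blurred.max()
```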
The corresponding constraint in the fractional Fourier domain is the enforcement of the measured modulus, implemented by the projection operator $P_M$, which ensures consistency with the measured data $Y$. For clarity, we describe our modified phase retrieval procedure in Algorithm 1. As the iteration proceeds, the support region shrinks from a loose size to a tight one (see Algorithm 1). The initial support is chosen loosely, scaled according to the pixel size of the imaging sensor.
Input: initial guess $x_1$, initial support $S_1$, initial standard deviation of the Gaussian kernel $\sigma_1$, iteration number $T$, start iteration counter $n=1$, support region update interval $T_S$.
Output: recovered object $\hat{x}$, estimated support $S$, recovered Fourier phase $\hat{\theta}$.
1: while $n \le T$ do
2:   if $n \bmod T_S = 0$ then
3:     $S \leftarrow \{\mathbf{r} : (G_{\sigma}*|x_n|)(\mathbf{r}) > \tau \max[G_{\sigma}*|x_n|]\}$; shrink $\sigma$
4:   else
5:     keep the current support $S$
6:   end if
7:   $Z_n \leftarrow \mathcal{F}^{p}[x_n]$
8:   $Z_n^{\prime} \leftarrow P_M[Z_n]$
9:   $x_n^{\prime} \leftarrow \mathcal{F}^{-p}[Z_n^{\prime}]$
10:  $x_n^{\prime\prime} \leftarrow \max\{\mathrm{Re}(x_n^{\prime}),\,0\}$
11:  $x_{n+1} \leftarrow P_S[P_{\mathrm{sp}}[x_n^{\prime\prime}]]$
12:  $n \leftarrow n+1$
13: end while
14: return $\hat{x} = x_{T+1}$, $S$, $\hat{\theta} = \arg Z_T^{\prime}$.
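For readers who want to experiment, the alternating-projection loop can be prototyped in a few lines. The sketch below simplifies the full procedure in two ways, both assumptions of ours: the FrFT of order $p$ is replaced by the ordinary FFT (the $p=1$ limit; a discrete FrFT would be swapped in for intermediate orders), and the shrink-wrap and sparsity refinements are omitted, leaving a plain error-reduction iteration.

```python
import numpy as np

def phase_retrieve(measured_mag, support, n_iter=200, seed=0):
    """Minimal alternating-projection (error-reduction) loop.
    measured_mag: modulus of the object's transform; support: boolean mask."""
    rng = np.random.default_rng(seed)
    x = rng.random(measured_mag.shape) * support
    for _ in range(n_iter):
        # Fourier-domain projection P_M: impose the measured modulus, keep the phase
        Z = np.fft.fft2(x)
        Z = measured_mag * np.exp(1j * np.angle(Z))
        # Object-domain projection: realness, non-negativity, and support
        x = np.clip(np.fft.ifft2(Z).real, 0.0, None) * support
    return x
```

With a tight, asymmetric support the twin-image ambiguity is suppressed and the loop typically converges for simple objects.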
In terms of implementation, the phase retrieval algorithm was run in MATLAB R2022b on Windows 11 with an Intel Core i9-13980HX processor and 32 GB of memory. A single run of the program on a $400\times400$ pixel image (500 iterations) completed in approximately 4 min.
3.3. Simulation Results
Figure 2 illustrates how the FrFT provides a unified framework for modeling diffraction phenomena across all physical regimes. In traditional optics, diffraction is typically categorized into near-field (Fresnel) and far-field (Fraunhofer) regimes, each requiring a different mathematical formalism. The FrFT, however, elegantly unifies these regimes through a single parameter: the transform order $p$. As depicted in Fig. 2(a), the color map plots the required FrFT order as a function of the object's size and the propagation distance (for a fixed sampling number). The blue dashed line marks the approximate boundary between the two regimes. Below this line lies the Fraunhofer (far-field) regime, characterized by FrFT orders approaching 1; this condition is met for large propagation distances and small objects. Conversely, the Fresnel regime is characterized by $p$ values in the open interval (0, 1), consistent with Fresnel number descriptions of diffraction phenomena.
Figures 2(b) and 2(c) further clarify these relationships. Figure 2(b) demonstrates that, for a fixed target size, the fractional order $p$ increases with propagation distance, gradually transitioning from the near field to the far field. Complementarily, Fig. 2(c) shows that, at a fixed propagation distance, reducing the target size also increases $p$, again driving the system toward the Fraunhofer regime. Figures 2(d)–2(g) confirm that our algorithm maintains high reconstruction quality across a wide range of fractional orders, performing effectively for both binary objects (the star spiral pattern) and more complex grayscale patterns (the cat). However, reconstruction quality degrades as $p$ approaches its physical limits of 0 and 1. When $p$ approaches 0, the FrFT approaches an identity transform: diffraction is minimal, so the object's phase information is insufficiently encoded into the measured intensity pattern. This provides only a weak FrFT-domain constraint for the algorithm, leading to error accumulation during iteration. Conversely, as $p$ approaches 1, the diffraction pattern approaches that of the standard Fourier transform. For a real-valued object, the resulting centrosymmetric intensity pattern gives rise to the well-known “twin-image” ambiguity[51], which fundamentally prevents a unique reconstruction. Despite these constraints at the parameter extremes, the unified FrFT framework eliminates the need for regime-specific approaches, offering a comprehensive framework for diffraction modeling and object reconstruction across a wide range of imaging distances and object scales.
3.4. Experimental Results
We first demonstrate the effectiveness of the proposed framework by imaging in the Fresnel regime. To quantitatively assess reconstruction fidelity, we employ the structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) as quality metrics. Figure 3 presents our primary experimental results for a fixed propagation distance of 40 m [Fig. 1(b)] and several target sizes, which correspond to different FrFT orders. The recorded diffraction patterns for 2 in. targets (1 in. = 2.54 cm) correspond to an FrFT order calculated from Eq. (9), as shown in Figs. 3(a1) and 3(a2). These intensity patterns, displayed on a logarithmic scale, exhibit the detailed interference fringes characteristic of Fresnel diffraction. The proposed phase retrieval successfully reconstructs both the snowflake pattern [Fig. 3(c1)] and the resolution target [Fig. 3(c2)]. A comparison with the ground truth images [Figs. 3(b1) and 3(b2)] reveals remarkable detail preservation (PSNR: 18.79 dB, SSIM: 0.93; and PSNR: 17.17 dB, SSIM: 0.89, respectively).
Figure 3. Main experimental results for imaging at a fixed distance of 40 m. (a1), (a2) Single-shot diffraction patterns recorded from two-inch targets (highlighted within the orange circles). (b1), (b2) The corresponding ground truth targets. (c1), (c2) Phase retrieval reconstruction results. (d) Direct imaging result of the resolution target (e) at a 40 m distance. The inset is a photograph of the target. (e) The 3D-printed resolution target used in the experiment. (f) Diffraction pattern of the three-spiral target shown in the first column of (g). (g) 1 in. target images and their corresponding reconstructions. (h) A magnified region from (c2) and its corresponding intensity profiles along the colored lines. (i) Resolution measurement from the target showing that 1.15 mm corresponds to two line pairs, indicating a spatial resolution of 1.74 line pairs/mm. All diffraction patterns are displayed on a normalized lg scale. Scale bars: 1 cm in (a1), (a2), and (f); 5 mm in (b1)–(c2) and (e); 5 cm in (d).
Figure 3(h) shows a magnified region from Fig. 3(c2). Figure 3(i) indicates that 1.15 mm corresponds to two line pairs, yielding a spatial resolution of 1.74 line pairs/mm at 40 m. This performance is equivalent to the theoretical resolving power, under the Rayleigh criterion, of a telescope with a diameter of 5.53 cm. It represents a 6.4-fold resolution enhancement over the physical diffraction limit of the imaging aperture (0.86 cm). A direct comparison with a conventional imaging system is impractical: matching our FoV with the same 0.86 cm aperture would require a lens with an impractically long focal length. For reference, Fig. 3(d) shows a direct image taken with the lens focused on a target at a distance of 40 m. The actual target is shown in Fig. 3(e).
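The aperture-equivalence arithmetic can be checked in a few lines. Since the operating wavelength is not restated in this excerpt, it enters as a parameter; the 650 nm value used below is our assumption, chosen because it reproduces the quoted 5.53 cm figure when one full line-pair period is taken as the resolvable element.

```python
def equivalent_aperture(lp_per_mm, distance_m, wavelength_m):
    """Aperture D (meters) whose Rayleigh limit theta = 1.22 * lambda / D
    matches the measured resolution, taking one full line-pair period as
    the resolvable element (an assumption consistent with the quoted numbers)."""
    period_m = 1e-3 / lp_per_mm       # one line-pair period at the target
    theta = period_m / distance_m     # subtended angle (small-angle approximation)
    return 1.22 * wavelength_m / theta
```

With the assumed 650 nm wavelength, `equivalent_aperture(1.74, 40.0, 650e-9)` gives roughly 5.5 cm, about 6.4 times the 0.86 cm receiving aperture.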
We note that, while the high-contrast line patterns [Fig. 3(c2)] are clear, the adjacent numerical digits appear significantly attenuated in the reconstruction. This is an expected consequence of the low reflected light flux from these specific features and the resulting low SNR at the detector. Although our diffraction-based approach cannot match the resolution and contrast of direct imaging, it offers two key advantages. First, the proposed system provides a feasible solution for OSA imaging. Second, our approach effectively functions as a variable focal length telescope, enabling flexible imaging across a continuum of distances without requiring any physical changes to the optical hardware.
Additional results include the diffraction patterns of two 1 in. targets [Figs. 3(f) and 3(g)], whose geometry corresponds to an FrFT order again given by Eq. (9), demonstrating the versatility of our approach across different target geometries. Figure 4 presents experimental results across different diffraction regimes, obtained by maintaining a fixed object size (1 in.) while varying the propagation distance (40, 30, and 20 m). This design systematically explores the transition from the near-Fraunhofer regime to deeper Fresnel regimes. While the system's angular resolution is fixed by the virtual aperture size, the achievable spatial resolution depends on the propagation distance: the object's diffraction pattern expands as the distance increases, causing the fixed-size sensor to capture a smaller fraction of the high-frequency information and thus lowering the final spatial resolution.
Figure 4. Imaging results at varying propagation distances. Diffraction patterns and reconstructions for a 1 in. target (within blue circles) at 40, 30, and 20 m. (a1)–(a3) Diffraction amplitude patterns (lg scale). (b1)–(b3) Corresponding reconstruction results. Scale bars represent 1 cm in (a) and 5 mm in (b).
Figure 5 showcases our system's performance in the Fraunhofer diffraction regime, where the FrFT order approaches 1. For this experiment, the propagation distance was fixed at 40 m while the target size was reduced to approximately 2.8 mm. The single-shot diffraction patterns [Figs. 5(a1)–5(a4)] display the centrosymmetric features characteristic of far-field diffraction. The corresponding ground truth images and phase retrieval reconstructions are presented in Figs. 5(b) and 5(c), respectively. Despite the fundamental “twin-image” ambiguity of Fraunhofer diffraction, our approach successfully recovers detailed target features with high fidelity, demonstrating the effectiveness of our unified phase retrieval framework even in this challenging regime.
Figure 5. Fraunhofer diffraction imaging and reconstruction. Imaging results at 40 m for target sizes of approximately 2.8 mm. (a1)–(a4) Diffraction amplitude patterns for different targets (lg scale). (b) Ground truth targets. (c) Corresponding reconstruction results. Scale bars represent 1 cm in (a) and 1 mm in (b) and (c).
Taken together, these results confirm that our unified approach is robust across a wide range of diffraction regimes, maintaining consistent reconstruction quality despite substantially different diffraction physics.
4. Discussion
Our method inherits the core concept of OSA—synthesizing a larger virtual aperture to surpass the diffraction limit—yet it differs fundamentally in its implementation. Traditional OSA relies on the precise deployment and strict phase synchronization of multiple sub-apertures, resulting in complex systems that typically require multi-frame acquisition. In contrast, our approach utilizes a single scattering medium as a random modulating element to capture high-frequency information equivalent to a large aperture in a single shot. This “computational” aperture synthesis strategy drastically simplifies the hardware and enables single-frame imaging.
To quantitatively evaluate reconstruction performance, we used the mean squared error (MSE) and the PSNR as metrics, averaging the results over 10 individual phase retrieval trials. Figures 6(a) and 6(b) demonstrate that intermediate FrFT orders (e.g., 0.79) consistently achieve both higher PSNR and more stable convergence than the conventional far-field case ($p = 1$). This improvement can be attributed to the quadratic phase term in Fresnel diffraction, which produces non-centrosymmetric diffraction patterns that benefit phase retrieval. In contrast, for the $p = 1$ case, the standard Fourier transform yields a centrosymmetric diffraction pattern that suffers from the intrinsic “twin-image” ambiguity [the MSE oscillations are indicated by the red arrow in Fig. 6(b)]. The simulation results in Figs. 6(c1) and 6(c2) further validate our approach's robustness across varying noise conditions. The input images were corrupted with Poisson noise, mimicking detector characteristics, followed by 1%, 5%, and 10% Gaussian noise. The results show that peak performance is consistently achieved at intermediate $p$ values, with a notable decline as $p$ approaches the far-field limit. This performance curve remains consistent as the noise level increases from 1% to 10%, suggesting that our method maintains stability in challenging imaging environments.
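The mixed-noise corruption used in these simulations (Poisson shot noise followed by additive Gaussian noise) can be sketched as follows; the photon budget and the peak normalization are our assumptions, not values from the text.

```python
import numpy as np

_rng = np.random.default_rng(1)

def add_mixed_noise(intensity, gauss_frac, photons=1e4):
    """Apply Poisson (shot) noise at an assumed photon budget, then additive
    Gaussian noise with std = gauss_frac * peak, clipping negative values."""
    scaled = intensity / intensity.max() * photons
    noisy = _rng.poisson(scaled).astype(float) / photons
    noisy += _rng.normal(0.0, gauss_frac * noisy.max(), intensity.shape)
    return np.clip(noisy, 0.0, None)
```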
Figure 6. Comparison of convergence speed and reconstruction quality with varying fractional order $p$. (a) Experimental PSNR and MSE curves for 10 reconstructions with random initializations for the targets of Figs. 3(a1) and 3(f). Blue and orange lines show mean values with standard deviation error bars. (b) Similar curves for Fig. 5(a4). Red arrows in (a) and (b) mark the MSE oscillation onset, indicating algorithm convergence after several iterations. (c1), (c2) Simulated PSNR and MSE versus $p$ under mixed noise conditions (Poisson noise followed by 1%, 5%, and 10% Gaussian noise) using the target from Fig. 2(g).
We also conducted experiments using three different scattering media: Xuan paper, A3 paper, and an acrylic scatterer, as depicted in Figs. 7(a1)–7(a3). The results, shown in Figs. 7(c1)–7(c3), demonstrate successful image reconstruction under all tested conditions. Notably, Xuan paper yielded the highest reconstruction quality, achieving a PSNR of 17.17 dB. The acrylic scatterer, however, significantly lowered the reconstruction quality by introducing speckle artifacts in the object's Fourier domain, which compromised the fidelity of the final image.
Figure 7. Imaging results with different scattering masks. (a1)–(a3) Photographs of the three scattering media: Xuan paper, A3 paper, and an acrylic scatterer. The target [Fig. 3(e)] was positioned 5 mm beneath the scattering medium. (b1)–(b3) Corresponding raw diffraction patterns at an imaging distance of 40 m. (c1)–(c3) Reconstructed images from the diffraction patterns in (b1)–(b3). Scale bars represent 1 cm in (b) and 5 mm in (c).
Despite these promising results, our approach has several limitations. First, performance degrades with highly scattering targets: the resulting diffraction patterns become dominated by speckle, which significantly reduces the photon utilization efficiency and complicates reconstruction. A second constraint arises from light absorption by the scattering screen, which lowers the overall photon efficiency, particularly in low-light conditions. The limited dynamic range of standard detectors also poses challenges when simultaneously capturing the high-intensity central maxima and the faint high-frequency features within a single diffraction pattern. In our computational approach, the space-bandwidth product (SBP) of the reconstructed image is limited by the camera and the downsampling operation, which together set the maximum retrievable object size. Although the theoretical SBP exceeds what we realize in practice, experimental noise and the phase retrieval algorithm reduce the final effective resolution. Specifically, we achieved 1.74 line pairs/mm experimentally, compared to a theoretical limit of 3.94 line pairs/mm [calculated from 400 pixels (downsampled from 1600 camera pixels) across the 5.08 cm retrievable extent]. Consequently, our experimental SBP falls short of the theoretical value. This shortfall arises partly because the fixed-FoV setup captures a target smaller than the maximum retrievable area, leading to underutilization of the available SBP. From the algorithmic perspective, we currently determine the fractional order $p$ by manually judging the reconstruction quality, with minor variations in $p$ having a negligible impact. The empirically determined optimal value agrees closely with the one calculated from the theoretical formula, Eq. (9). To achieve full automation, a deep learning model could be trained to predict the optimal $p$ directly from the input data, eliminating the need for manual tuning.
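The theoretical-limit figure quoted above follows from Nyquist sampling of the retrievable extent (two pixels per line pair); a quick consistency check:

```python
n_pix = 400                  # downsampled pixels across the field
extent_mm = 50.8             # 2 in. retrievable object extent
theoretical_lp_mm = n_pix / (2.0 * extent_mm)   # two pixels per line pair
measured_lp_mm = 1.74        # experimentally achieved resolution
```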
5. Conclusion
In summary, we have developed a unified phase retrieval framework bridging Fresnel and Fraunhofer regimes and experimentally demonstrated single-shot full-field imaging with exceptional resolution (1.74 line pairs/mm at 40 m) using only a small 0.86 cm aperture lens. The approach effectively synthesizes a 5.53 cm virtual aperture telescope without the need for complex optical alignments. Despite some limitations when imaging highly scattering targets, constraints from the detector dynamic range, and scattering screen absorption, our method offers significant advantages in cost and complexity over large-aperture systems. Future explorations include rough surface reconstruction via multiple speckle diversity[55,56], high dynamic range acquisition using multi-exposure fusion[57], and application to more challenging imaging scenarios (such as robust recovery of fully complex-valued targets) through deep learning approaches. Furthermore, relaxing the coherence requirement by extending our framework to broadband illumination, inspired by ingenious pioneering works[58,59], also presents a key avenue toward more robust and practical real-world scenarios. This framework establishes a foundation for novel imaging systems that can operate seamlessly across different diffraction regimes, with potential applications in remote sensing, astronomy, and other long-distance imaging scenarios.
Acknowledgments
We thank Wenwen Li, Xin Huang, and Yu Hong for helpful discussions and assistance. This work was supported by the Innovation Program for Quantum Science and Technology (No. 2021ZD0300300), the National Natural Science Foundation of China (No. 62031024), the Shanghai Rising-Star Program (No. 24YF2751300), the Anhui Initiative in Quantum Information Technologies, the Shanghai Municipal Science and Technology Major Project (No. 2019SHZDZX01), the Shanghai Science and Technology Development Funds (No. 22JC1402900), the Shanghai Academic/Technology Research Leader program (No. 21XD1403800), the Chinese Academy of Sciences, and the New Cornerstone Science Foundation through the Xplorer Prize.
[1] H. M. Boffin et al., Astronomy at High Angular Resolution: A Compendium of Techniques in the Visible and Near-Infrared, Vol. 439 (2016).