Advanced Photonics Nexus, Volume 4, Issue 4, 046014 (2025)

Retained imaging quality with reduced manufacturing precision: leveraging computational optics

Yujie Xing1,2,3, Xiong Dun1,2,3,*, Dinghao Yang1,2,3, Siyu Dong1,2,3, Yifan Peng4, Xuquan Wang1,2,3,*, Jun Yu1,2,3,*, Zhanshan Wang1,2,3,5, and Xinbin Cheng1,2,3,5
Author Affiliations
  • 1Tongji University, Institute of Precision Optical Engineering, School of Physics Science and Engineering, Shanghai, China
  • 2MOE Key Laboratory of Advanced Micro-Structured Materials, Shanghai, China
  • 3Shanghai Frontiers Science Center of Digital Optics, Shanghai, China
  • 4The University of Hong Kong, Department of Electrical and Electronic Engineering, Hong Kong, China
  • 5Tongji University, Shanghai Institute of Intelligent Science and Technology, Shanghai, China

    Manufacturing-robust imaging systems leveraging computational optics hold immense potential for easing manufacturing constraints and enabling the development of cost-effective, high-quality imaging solutions. However, conventional approaches, which typically rely on data-driven neural networks to correct optical aberrations caused by manufacturing errors, are constrained by the lack of effective tolerance analysis methods for quantitatively evaluating manufacturing error boundaries. This limitation is crucial for further relaxing manufacturing constraints and providing practical guidance for fabrication. We propose a physics-informed design paradigm for manufacturing-robust imaging systems with computational optics, integrating a physics-informed tolerance analysis methodology for evaluating manufacturing error boundaries and a physics-informed neural network for image reconstruction. With this approach, we achieve a manufacturing-robust imaging system based on an off-axis three-mirror freeform all-aluminum design, delivering a modulation transfer function exceeding 0.34 at the Nyquist frequency (72 lp/mm) in simulation. Notably, this system requires a manufacturing precision of only 0.5λ in root mean square (RMS), representing a remarkable 25-fold relaxation compared with the conventional requirement of 0.02λ in RMS. Experimental validation further confirmed that the manufacturing-robust imaging system maintains excellent performance in diverse indoor and outdoor environments. Our proposed method paves the way for achieving high-quality imaging without the necessity of high manufacturing precision, enabling practical solutions that are more cost-effective and time-efficient.


    1 Introduction

    The demand for low-cost, high-quality imaging systems has been steadily rising across various fields, such as aviation and aerospace,1–4 precision guidance,5–7 and security monitoring.8,9 Achieving high-quality imaging typically requires complex optical designs to correct aberrations.10–12 Notably, equally important as design is the precision in the manufacturing of optical components, often requiring nanoscale accuracy.13–16 These requirements significantly increase complexity and costs, making it challenging to deliver cost-effective solutions. Consequently, the development of manufacturing-robust imaging systems has become a prominent research focus, with efforts encompassing design methodology,17,18 hardware compensation strategy,19,20 and computational optics strategy.21,22

    The design methodology employs optimization strategies to control manufacturing error sensitivity during the optical design phase, facilitating the development of systems with reduced sensitivity to manufacturing errors through iterative optimization. Based on this insight, various optimization strategies have been extensively studied, including the use of linear combinations of basis functions to model manufacturing errors17,23 and the control of incident ray angles.18 Although this methodology has shown advantages in reducing sensitivity to manufacturing errors, its improvements remain limited. For instance, Deng et al.18 still required stringent manufacturing tolerances of 0.049λ in root mean square (RMS) for an off-axis three-mirror freeform system, even after optimization.

    To further reduce sensitivity to manufacturing errors, hardware compensation strategies have been widely researched, involving the use of specific optical elements within the system20,24 or the introduction of additional optical components19,25 as aberration compensators to correct residual aberrations caused by manufacturing errors in other optical elements. As hardware compensators offer greater design flexibility, manufacturing tolerances for other optical components can be relaxed to 0.3λ20 to 0.4λ26 in RMS, resulting in an improvement of more than an order of magnitude. However, the fabrication of compensators still requires extremely high precision, such as 0.01λ in Ref. 25, posing a significant challenge to manufacturing.

    Recent advancements in computational optics have demonstrated a strong capability in aberration correction,27–31 enabling the replacement of compensators in hardware compensation strategies and thereby avoiding their high-precision manufacturing challenges. This approach was pioneered by Chen et al.21,22 in the mass production of mobile terminals. They developed a series of post-processing algorithms based on data-driven neural networks, including the field-of-view shared kernel prediction network (FOV_KPN),32 dilated omni-dimensional dynamic convolution (DOConv),21 and a prior quantization model-based DOConv,22 effectively mitigating image quality degradation caused by manufacturing errors. However, the lack of a comprehensive understanding of manufacturing error boundaries in computational optics has prevented this line of research from further relaxing manufacturing requirements for mobile terminals. A major obstacle in exploring these error boundaries is the absence of efficient tolerance analysis methods. Existing data-driven neural networks require considerable time for each training session to achieve optimal image quality. For example, training such a neural network typically takes nearly 5 h, so completing a single iteration with 1000 Monte Carlo random samples would require 208 days, a time cost that is impractical for analyzing manufacturing error boundaries. Consequently, the exploration of manufacturing error boundaries in computational optics systems remains a significant challenge.

    In this work, we seek to overcome the challenge of exploring manufacturing error boundaries in computational optics and to further relax manufacturing requirements, realizing imaging systems that are robust to manufacturing errors. Specifically, we propose a physics-informed design paradigm that combines a physics-informed tolerance analysis methodology with a physics-informed neural network-based post-processing algorithm. The tolerance analysis leverages the Wiener filter33–36 as a physics prior to reconstruct image details, reducing computational demands by several orders of magnitude and enabling efficient evaluation of manufacturing error boundaries via Monte Carlo sampling. The post-processing algorithm integrates a learned Wiener filter and an FOV_KPN neural network, called the learned Wiener-based FOV_KPN neural network. In this framework, the learned Wiener filter restores image details, whereas the neural network removes spatially variant noise and artifacts introduced by the learned Wiener filter, enabling effective recovery of high-frequency information. In addition, we evaluate the proposed approach through simulations and an experimental prototype of an off-axis three-mirror freeform all-aluminum design, demonstrating that our method achieves a 25-fold reduction in manufacturing precision while maintaining excellent performance, with modulation transfer functions (MTFs) exceeding 0.44 at 72 lp/mm.

    2 Methods

    The proposed physics-informed design paradigm for manufacturing-robust imaging systems with computational optics comprises two key components: an optical system and a reconstruction model, as illustrated in Fig. 1. The optical system utilizes tolerance analysis to determine the manufacturing error boundaries, whereas the reconstruction model leverages a physics-informed neural network to reconstruct images from sensor measurements. In this section, we present the physics-informed tolerance analysis for computational optics and the learned Wiener-based FOV_KPN neural network, which together facilitate the development of a manufacturing-robust imaging system leveraging computational optics.

    Figure 1. Framework of proposed physics-informed design paradigm for manufacturing-robust imaging system with computational optics. First, a physics-informed tolerance analysis for computational optics is employed to determine the manufacturing error boundaries. Subsequently, the learned Wiener-based FOV_KPN neural network is introduced to effectively reconstruct images degraded by manufacturing errors.

    2.1 Physics-Informed Tolerance Analysis for Computational Optics

    Notably, introducing computational optics cannot infinitely relax manufacturing requirements. Therefore, akin to traditional optical design, it is imperative to establish a tolerance analysis methodology tailored to computational imaging systems. Unlike conventional tolerance analysis, this methodology must incorporate reconstruction algorithms and use the final imaging quality as the evaluation criterion. However, reconstruction algorithms such as data-driven neural networks require enormous computational resources, making them impractical for analyzing manufacturing error boundaries. Inspired by Ref. 36, we propose a physics-informed tolerance analysis methodology for computational optics. This methodology employs the Wiener filter as a physics prior to reconstruct image details and uses MTFs as the image quality evaluation criterion. To effectively model potential manufacturing errors, we have also developed a general manufacturing error model. The detailed procedure is illustrated in Fig. 2(a), encompassing four primary components: assembly tolerance analysis, manufacturing error generation, image rendering, and image reconstruction and evaluation.

    Figure 2. Details of the physics-informed tolerance analysis for computational optics. (a) Flowchart. (b) Assembly tolerance analysis process, including setup of assembly tolerances, performing Monte Carlo ray-tracing, and identifying the worst-performing case. (c) Manufacturing error generation process, including building the manufacturing error model, determining parameter boundaries, and generating manufacturing errors. (d) Image rendering process: generating a degraded image through convolution of the target image with the system PSF, derived from the system wavefront via ray-tracing. (e) Image reconstruction and evaluation process: applying the Wiener filter to restore degraded images, followed by performance evaluation using MTF calculation.

    2.1.1 Assembly tolerance analysis

    Figure 2(b) illustrates the process of assembly tolerance analysis. Using current precision assembly technologies, assembly tolerance parameters such as tilt, decenter, and thickness are defined within the optical design software. Subsequently, the Monte Carlo ray-tracing method is employed to perform the assembly tolerance analysis and identify the worst-performing case based on the system’s MTF criteria. This worst-performing case will serve as the foundational design for subsequent manufacturing tolerance analysis.

    2.1.2 Manufacturing error generation

    Figure 2(c) illustrates the process of manufacturing error generation. The manufacturing error $\Delta z(x,y)$ is constructed by incorporating an irregularity component $\Delta z_{\mathrm{irr}}(x,y)$ and a periodic rotationally symmetric component $\Delta z_{\mathrm{rot}}(x,y)$ as

    $$\Delta z(x,y) = \alpha\,\Delta z_{\mathrm{irr}}(x,y) + (1-\alpha)\,\Delta z_{\mathrm{rot}}(x,y), \tag{1}$$

    where the weight $\alpha$ is adjusted to simulate manufacturing errors caused by different manufacturing technologies.

    The irregularity component $\Delta z_{\mathrm{irr}}(x,y)$ is represented by an xy-extended polynomial,37 as shown below:

    $$\Delta z_{\mathrm{irr}}(x,y) = \sum_{i=1}^{N} A_i E_i(x,y), \tag{2}$$

    and the periodic rotationally symmetric component $\Delta z_{\mathrm{rot}}(x,y)$ is represented by a Fourier series expansion polynomial:

    $$\Delta z_{\mathrm{rot}}(x,y) = \sum_{i=1}^{M} B_i \cos\!\left(i\,\omega_0 \sqrt{x^2+y^2}\right), \tag{3}$$

    where $A_i$ is the coefficient of the $i$th xy-extended polynomial term $E_i(x,y)$; $B_i$ and $\omega_0$ are the amplitude and frequency coefficients, respectively, in the Fourier series expansion; and $N$ and $M$ are the numbers of polynomial coefficients in the xy-extended polynomial and the Fourier series expansion polynomial, respectively.

    Typical manufacturing error targets are often specified as an RMS value, denoted $\mathrm{Err}$. Following state-of-the-art works,38 we can establish the relationship between the boundaries of $A_i$ and $B_i$ and $\mathrm{Err}$ as

    $$A_i = \frac{\alpha\,\mathrm{Err}}{N \times \mathrm{std}[E_i(x,y)]}, \tag{4}$$

    $$B_i = \frac{(1-\alpha)\,\mathrm{Err}}{M \times \mathrm{std}\!\left[\cos\!\left(i\,\omega_0\sqrt{x^2+y^2}\right)\right]}, \tag{5}$$

    where $\mathrm{std}(\cdot)$ denotes the calculated standard deviation; at these boundary values, the summed RMS contributions of the individual terms do not exceed $\mathrm{Err}$. Refer to section 1 in the Supplementary Material for derivation details.

    Ultimately, given the RMS value of the manufacturing error, the parameters of the xy-extended polynomial and the Fourier series expansion polynomial are randomly generated within the ranges defined by Eqs. (4) and (5). If the RMS value of a simulated manufacturing error deviates from the given RMS value by more than 10%, that simulated manufacturing error is discarded, and the process continues until n groups of manufacturing errors are obtained.
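    To make this generation step concrete, below is a minimal Python sketch under stated assumptions: a low-order monomial basis stands in for the xy-extended polynomial terms $E_i$, coefficients are drawn uniformly within the boundaries of Eqs. (4) and (5), and a draw is retained only if its RMS lands within 10% of the target (with a rescaling fallback so the demo always terminates). The function and parameter names are illustrative, not the authors' implementation.

```python
import numpy as np

def generate_error_map(X, Y, err_rms, alpha, N=10, M=5, omega0=6.0,
                       tol=0.1, max_tries=10000, rng=None):
    """Random manufacturing-error map following Eqs. (1)-(5) (sketch)."""
    rng = rng or np.random.default_rng()
    # Illustrative stand-in for the xy-extended polynomial terms E_i.
    exps = [(p, q) for p in range(4) for q in range(4) if p + q > 0][:N]
    irr_basis = [X**p * Y**q for (p, q) in exps]
    r = np.sqrt(X**2 + Y**2)
    rot_basis = [np.cos(i * omega0 * r) for i in range(1, M + 1)]

    for _ in range(max_tries):
        dz = np.zeros_like(X)
        for E in irr_basis:   # irregular component, boundary per Eq. (4)
            A_max = alpha * err_rms / (N * E.std())
            dz += rng.uniform(-A_max, A_max) * E
        for C in rot_basis:   # rotational component, boundary per Eq. (5)
            B_max = (1 - alpha) * err_rms / (M * C.std())
            dz += rng.uniform(-B_max, B_max) * C
        rms = np.sqrt(np.mean((dz - dz.mean()) ** 2))
        if abs(rms - err_rms) <= tol * err_rms:   # 10% acceptance band
            return dz
    # Demo-only fallback: rescale the last draw onto the target RMS.
    return dz * (err_rms / rms)

X, Y = np.meshgrid(np.linspace(-1, 1, 256), np.linspace(-1, 1, 256))
error_map = generate_error_map(X, Y, err_rms=0.5, alpha=0.7)  # in waves
```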

    2.1.3 Image rendering

    Figure 2(d) illustrates the process of image rendering. The degraded image $g(x,y)$ is formulated as a convolution of the input image $f(x,y)$ with the point spread function (PSF) of the optical system $h(x,y)$, in the presence of pixel-wise Gaussian-Poisson sensor noise:

    $$g(x,y) = \eta_p\!\left[f(x,y) \ast h(x,y),\, \sigma_p\right] + \eta_g\!\left[f(x,y) \ast h(x,y),\, \sigma_g\right], \tag{6}$$

    where $\eta_p(\cdot,\sigma_p)$ is the Poisson noise and $\eta_g(\cdot,\sigma_g)$ is the Gaussian noise. The PSF $h(x,y)$ is a function of the optical system's wavefront aberrations $W(s,t)$ and can be expressed as

    $$h(x,y) = \left|\mathcal{F}\{A(s,t)\,e^{jkW(s,t)}\}\right|^2, \tag{7}$$

    where $k = 2\pi/\lambda$ is the wave number. The aperture of the optical system is represented by a circ function $A(s,t)$ of diameter $D$, whereas the spatial coordinates at the aperture and sensor planes are denoted $(s,t)$ and $(x,y)$, respectively. The wavefront aberrations $W(s,t)$ can be obtained directly from the optical design software based on the simulated manufacturing errors generated above.
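    As a rough illustration of this rendering chain, the sketch below computes the PSF from a sampled pupil via Eq. (7) and then applies Eq. (6). It assumes the pupil grid matches the image grid and uses one common scaling convention for the Poisson term (photon counts proportional to signal/σp); both are simplifications rather than the paper's exact pipeline.

```python
import numpy as np

def render_measurement(img, wavefront, aperture, wavelength,
                       sigma_g=1.1e-3, sigma_p=8.8e-5, rng=None):
    """Degraded image per Eqs. (6)-(7), with simplified sampling."""
    rng = rng or np.random.default_rng()
    k = 2 * np.pi / wavelength
    pupil = aperture * np.exp(1j * k * wavefront)          # A(s,t) e^{jkW(s,t)}
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2  # Eq. (7)
    psf /= psf.sum()
    # Frequency-domain convolution of the target image with the PSF.
    blurred = np.real(np.fft.ifft2(
        np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))
    # Gaussian-Poisson sensor noise; the Poisson scaling is an assumption.
    poisson = rng.poisson(np.clip(blurred, 0, None) / sigma_p) * sigma_p
    gaussian = rng.normal(0.0, sigma_g, size=blurred.shape)
    return np.clip(poisson + gaussian, 0.0, 1.0), psf
```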

    2.1.4 Image reconstruction and evaluation

    As shown in Fig. 2(e), the Wiener filter is selected as a physics prior to reconstruct degraded images. The mathematical formulation of the image reconstruction is

    $$I(x,y) = \mathcal{F}^{-1}\!\left\{\frac{\overline{H(u,v)}\,G(u,v)}{\left|H(u,v)\right|^2 + \sigma}\right\}, \tag{8}$$

    where $I(x,y)$ is the reconstructed image, $(u,v)$ are the frequency-space coordinates, $G(u,v)$ is the Fourier transform of the measurement, $H(u,v)$ is the Fourier transform of the PSF $h(x,y)$, $\overline{(\cdot)}$ denotes the complex conjugate, $\sigma$ is a regularization parameter related to the noise level of the measurement, and $\mathcal{F}^{-1}\{\cdot\}$ is the inverse Fourier transform.
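    A direct NumPy transcription of Eq. (8) is only a few lines; the sketch below assumes a centered, normalized PSF on the same grid as the measurement, with σ supplied by the caller.

```python
import numpy as np

def wiener_reconstruct(g, psf, sigma=1e-3):
    """Wiener deconvolution, Eq. (8)."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(np.fft.ifftshift(psf), s=g.shape)
    I = np.conj(H) * G / (np.abs(H) ** 2 + sigma)   # regularized inverse filter
    return np.real(np.fft.ifft2(I))
```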

    The MTF at the Nyquist frequency is selected as the evaluation criterion for imaging quality. Specifically, we use a five-bar Nyquist frequency target, consisting of five pairs of black and white bars, as the input image $f(x,y)$; the MTF is then calculated as

    $$\mathrm{MTF} = \frac{\pi}{4}\cdot\frac{\mathrm{SDN}_1 - (\mathrm{SDN}_2 + \mathrm{SDN}_3)/2}{\mathrm{SDN}_1 + (\mathrm{SDN}_2 + \mathrm{SDN}_3)/2}, \tag{9}$$

    where $\mathrm{SDN}_1$ is the average signal of the third white bar, and $\mathrm{SDN}_2$ and $\mathrm{SDN}_3$ are the average signals of the black bars on either side of the third white bar [see Fig. 2(e)].
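    Given masks locating the third white bar and its two neighboring black bars in the reconstructed target, Eq. (9) reduces to the sketch below; how the masks are constructed from the target geometry is left to the caller and is an assumption here.

```python
import numpy as np

def five_bar_mtf(recon, white_mask, black_left_mask, black_right_mask):
    """MTF at Nyquist from a five-bar target, Eq. (9)."""
    sdn1 = recon[white_mask].mean()                     # third white bar
    dark = 0.5 * (recon[black_left_mask].mean()
                  + recon[black_right_mask].mean())     # adjacent black bars
    return (np.pi / 4) * (sdn1 - dark) / (sdn1 + dark)
```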

    2.1.5 Implementation details

    When conducting the tolerance analysis, we proceed in two stages, with each stage following the procedure outlined in Fig. 2(a). In the first stage, the manufacturing error limitation for each individual optical element is analyzed while the other optical elements remain ideal, without any manufacturing errors. The resulting limitation for each optical element is recorded as $\mathrm{Err}_{\mathrm{lim}}^{i}$, where $i$ is the index of the optical element. Based on the $\mathrm{Err}_{\mathrm{lim}}^{i}$ obtained in the first stage, the second stage analyzes the manufacturing error boundaries for all optical elements simultaneously. Each manufacturing error is scaled by multiplying its limit boundary $\mathrm{Err}_{\mathrm{lim}}^{i}$ by a factor $R_a$:

    $$\mathrm{Err}^{i} = R_a \cdot \mathrm{Err}_{\mathrm{lim}}^{i}. \tag{10}$$

    Then, $R_a$ is gradually increased until the recovered MTF fails to meet the specified requirement.
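    The second-stage search can be summarized by the hypothetical loop below: err_limits holds the first-stage limits, and evaluate_min_mtf is an assumed callback that generates the Monte Carlo error realizations at the scaled boundaries, renders and Wiener-reconstructs the target, and returns the minimum recovered MTF at Nyquist.

```python
def find_ra_boundary(err_limits, evaluate_min_mtf,
                     mtf_req=0.3, ra_step=0.02, n_samples=1000):
    """Grow Ra per Eq. (10) until the recovered-MTF requirement fails."""
    ra = ra_step
    while True:
        scaled = [ra * e for e in err_limits]        # Err^i = Ra * Err_lim^i
        if evaluate_min_mtf(scaled, n_samples) < mtf_req:
            return ra - ra_step                      # last Ra that still passed
        ra += ra_step
```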

    2.2 Learned Wiener-Based FOV_KPN Neural Network

    To provide a powerful capability for resolving high-frequency details from the severe, spatially varying degradation caused by manufacturing errors, and inspired by the recent success of deep image restoration networks,32,39–42 we developed a learned Wiener-based FOV_KPN neural network, as shown in Fig. 3. It consists of two main components: learned Wiener deconvolution and the FOV_KPN neural network.

    Figure 3. Architecture of the learned Wiener-based FOV_KPN neural network. We integrate learned Wiener deconvolution and the FOV_KPN neural network to address severe spatially varying degradation, enabling effective recovery of high-frequency details across all fields and significant suppression of field-specific noise and artifacts.

    2.2.1 Learned Wiener deconvolution

    It is reasonable to assume that adjacent PSFs share a high degree of similarity.42–44 Therefore, following Ref. 42, we approximate the spatially nonuniform PSFs as patch-wise uniform ones and employ a learned Wiener deconvolution for each patch, as described in Eq. (8). The advantages of Wiener deconvolution are detailed in section 2 in the Supplementary Material. During training, the regularization parameter σ is iteratively updated to converge toward an optimal value that yields superior image reconstruction performance.
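    A minimal PyTorch sketch of this unit is shown below: the PSF within each patch is treated as uniform, and the regularizer σ is stored as a learnable log-parameter so it stays positive while being updated by backpropagation. Patch extraction and blending are assumed to happen outside the module.

```python
import torch
import torch.nn as nn
import torch.fft as tfft

class LearnedWiener(nn.Module):
    """Patch-wise Wiener deconvolution with a learnable regularizer (sketch)."""
    def __init__(self, sigma_init=1e-3):
        super().__init__()
        # Learn log(sigma) so that sigma remains positive during training.
        self.log_sigma = nn.Parameter(torch.log(torch.tensor(float(sigma_init))))

    def forward(self, patch, psf):
        # patch, psf: (B, 1, H, W); psf assumed centered and normalized.
        H = tfft.fft2(tfft.ifftshift(psf, dim=(-2, -1)))
        G = tfft.fft2(patch)
        sigma = self.log_sigma.exp()
        I = torch.conj(H) * G / (H.abs() ** 2 + sigma)   # Eq. (8)
        return tfft.ifft2(I).real
```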

    2.2.2 FOV_KPN neural network

    Wiener deconvolution often yields compromised image quality due to amplified noise and the introduction of artifacts. Moreover, this amplified noise and these artifacts exhibit a strong correlation with the spatially variant patch PSFs, making conventional convolution solutions inefficient in addressing these issues, as they share the same kernels for different patches. To address the spatially variant features, we follow Ref. 32 by introducing the FOV_KPN neural network, an extension of U-Net. This network consists of three essential components: the FOV block, the deformable Resblock, and the KPN block.

    In the FOV block, the pixel coordinate matrices in the X and Y directions are initially convolved to create a spatial attention mask. Subsequently, the deconvolution image from the learned Wiener deconvolution unit undergoes convolution to extract its features. Finally, the image features are modulated using the spatial attention mask and combined to generate the output features. In the deformable Resblock, the convolutional kernel shapes are modified with learned offset maps on the feature maps to incorporate spatially variant information into the output. In the KPN block, dilated convolution with four scales is adopted to ensure complete field coverage for the restoration output. The detailed architecture of the FOV_KPN neural network is presented in section 3 in the Supplementary Material.
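    The following hypothetical PyTorch sketch illustrates the FOV block's core idea: normalized X/Y pixel-coordinate maps are convolved into a spatial attention mask that modulates the image features, letting the network adapt its response to field position. The channel counts, kernel sizes, and residual combination are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class FOVBlock(nn.Module):
    """Coordinate-conditioned spatial attention, in the spirit of the FOV block."""
    def __init__(self, in_ch=1, feat_ch=32):
        super().__init__()
        self.coord_conv = nn.Sequential(          # mask from X/Y coordinates
            nn.Conv2d(2, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.Sigmoid())
        self.feat_conv = nn.Conv2d(in_ch, feat_ch, 3, padding=1)

    def forward(self, x):
        b, _, h, w = x.shape
        ys = torch.linspace(-1, 1, h, device=x.device)
        xs = torch.linspace(-1, 1, w, device=x.device)
        yy, xx = torch.meshgrid(ys, xs, indexing="ij")
        coords = torch.stack([xx, yy]).unsqueeze(0).expand(b, -1, -1, -1)
        mask = self.coord_conv(coords)             # spatial attention mask
        feat = self.feat_conv(x)                   # image features
        return feat * mask + feat                  # modulate, then combine
```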

    3 Simulation Assessment

    Before conducting experimental measurements, we used synthetic data to validate the effectiveness of our proposed methods for developing an off-axis three-mirror freeform all-aluminum system with robustness to manufacturing errors. Initially, the physics-informed tolerance analysis was employed to demonstrate its effectiveness. Subsequently, the learned Wiener-based FOV_KPN neural network was trained to highlight its superiority.

    3.1 Simulation Configuration

    To validate the proposed methods, we designed an off-axis three-mirror freeform all-aluminum system with a field of view of 7.8 deg × 6.5 deg, an F-number of 2.8, and spectral coverage from 400 to 1050 nm, as shown in Fig. 4(a). In addition, the system employs a CMOS detector (FLIR BFS-U3-51S5M-BD2 operating in Bin 2 mode) with a pixel size of 6.9 μm and a detector array of 1224 pixel × 1024 pixel. After multiple rounds of optimization, the system achieves good image quality, as shown in Figs. 4(b) and 4(c). The maximum RMS spot diameter across the full field is only 0.988 μm, smaller than 0.33 of a pixel. The MTFs of the system demonstrate exceptional performance, surpassing 0.78 at 72 lp/mm. The detailed optical prescription can be found in section 4.1 in the Supplementary Material.

    Figure 4. Optical layout and performance of the off-axis three-mirror freeform system. We present the optical layout of the freeform system (a), along with performance evaluation including MTFs (b) and spot diagrams (c) in various fields.

    To train the learned Wiener-based FOV_KPN neural network, we selected 830 ground-truth images from the DIV2K45 dataset, with 800 images used for training and 30 images for testing. Degraded images were generated by performing patch-wise convolution of the selected ground-truth images with simulated PSFs. Further details are provided in section 4.3 in the Supplementary Material.

    We employed the ADAM optimizer to train the learned Wiener-based FOV_KPN neural network. Subsequently, we estimated the parameters of the pixel-wise Gaussian-Poisson noise model following Ref. 46. Specifically, we determined $\sigma_g = 1.1\times10^{-3}$ for Gaussian noise and $\sigma_p = 8.8\times10^{-5}$ for Poisson noise. A detailed description of the noise testing procedure is provided in section 4.5 in the Supplementary Material. In addition, we used a composite loss function consisting of an $\ell_2$ loss (weight $\lambda_1 = 1$) and a perceptual loss (weight $\lambda_2 = 0.01$), with full details available in section 3 in the Supplementary Material. Finally, the learning rate was set to $10^{-4}$, and the optimization phase was conducted on an NVIDIA RTX 4090 GPU.
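    For completeness, a sketch of the stated optimization setup follows: ADAM with learning rate 1e-4 and a composite loss combining ℓ2 (λ1 = 1) and a perceptual term (λ2 = 0.01). The VGG16-feature perceptual loss shown here is a common stand-in; the paper's exact perceptual term is defined in its Supplementary Material, and the model/data names in the usage comment are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16

class PerceptualLoss(nn.Module):
    """VGG16-feature MSE as an illustrative perceptual loss."""
    def __init__(self):
        super().__init__()
        self.features = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
        for p in self.features.parameters():
            p.requires_grad = False   # frozen feature extractor

    def forward(self, pred, target):
        # Replicate single-channel images to three channels for VGG.
        return F.mse_loss(self.features(pred.repeat(1, 3, 1, 1)),
                          self.features(target.repeat(1, 3, 1, 1)))

def composite_loss(pred, target, perceptual, lam1=1.0, lam2=0.01):
    return lam1 * F.mse_loss(pred, target) + lam2 * perceptual(pred, target)

# Usage sketch (model and data loaders are assumed to exist):
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# loss = composite_loss(model(measurement), ground_truth, PerceptualLoss())
```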

    3.2 Effectiveness Validation of Proposed Tolerance Analysis Method

    Utilizing the physics-informed tolerance analysis methodology, we conducted the manufacturing error boundary analysis for the off-axis three-mirror system. The detailed parameter settings can be found in section 4.2 in the Supplementary Material. We started by analyzing the manufacturing error limitations for each freeform mirror, using the minimum MTF>0.3 as the evaluation criterion. After conducting multiple iterations of tolerance analysis, as illustrated in Fig. 5(a), the determined limitation values for the manufacturing error of M1, M2, and M3 are 1.9λ, 2.6λ, and 1.6λ, respectively. Based on these results, we sought to incorporate manufacturing errors for all freeform mirrors using Eq. (10). Following additional iterations of tolerance analysis, as shown in Fig. 5(b), the scale factor Ra for a traditional imaging system was found to be <0.014, with the manufacturing error boundaries for M1, M2, and M3 being 0.032λ, 0.042λ, and 0.02λ, respectively. This clearly highlights the indispensable requirement for nanoscale manufacturing precision in the absence of a computational optics strategy. By contrast, when employing the computational optics strategy, the scale factor Ra was determined to be 0.36, and the manufacturing error boundaries for M1, M2, and M3 were 0.675λ, 0.924λ, and 0.556λ, respectively, representing an almost 25-fold reduction in manufacturing precision.

    Figure 5. Tolerance analysis results. (a) The relationship between the minimum MTF@72 lp/mm and the manufacturing errors of each mirror individually. (b) The relationship between the minimum MTF@72 lp/mm and the scale factor Ra when incorporating manufacturing errors of all mirrors. We require the system MTF@72 lp/mm to be larger than 0.3.

    To validate the effectiveness of the proposed tolerance analysis method, we evaluated the performance of reconstructed images for the worst-performing case identified in the tolerance analysis. Specifically, as shown in Fig. 6(a), target images from 100 randomly selected fields of view were analyzed to compare the recovered MTF consistency between the Wiener filter algorithm and the proposed learned Wiener-based FOV_KPN neural network, a cascaded approach integrating the Wiener filter and a neural network. Figure 6(b) shows a good linear correlation between the recovered MTF values of the two algorithms, confirming the feasibility of using the Wiener filter as a substitute for complex neural networks in computational optics tolerance analysis. The Wiener filter effectively restores image details, whereas the cascaded deep learning algorithm addresses the noise, artifacts, and residual blur introduced by Wiener filtering.

    Figure 6. Validation results of the proposed tolerance analysis method. (a) Target images from 100 randomly selected FOVs displayed on the right panel, along with detailed blocks from three FOVs showing recovery results from the Wiener filter and the proposed learned Wiener-based FOV_KPN neural network. (b) Comparison of recovered MTF values between the Wiener filter algorithm and the proposed learned Wiener-based FOV_KPN neural network, with the red line representing the linear fit.

    3.3 Superiority Validation of Proposed Postprocessing Algorithm

    To showcase the advantages of the proposed learned Wiener-based FOV_KPN neural network, we devised an ablation experiment built on the U-Net architecture with three essential components: the Wiener filter, FOV, and KPN units. Qualitative and quantitative comparisons are presented in Fig. 7 and Table 1. Additional comparisons are available in section 5.2 in the Supplementary Material.

    Figure 7. Assessment in simulation. Reconstruction performance, including PSNR (dB), SSIM, and MTF, is evaluated for four methods: U-Net, FOV_KPN, Wiener filter, and our proposed learned Wiener-based FOV_KPN. The reconstruction result of our proposed method is presented on the left side, whereas magnified results from different methods are displayed on the right side, with corresponding positions highlighted on the left side. (a) Image from the DIV2K45 dataset. (b) MTF test image.

    Table 1. Quantitative comparison of average PSNR (dB), SSIM, and frame rate over 30 unseen images in the DIV2K dataset45 with identical noise levels.

      | Method | Wiener | FOV | KPN | U-Net | PSNR (dB) ↑ | SSIM ↑ | Frame rate (fps) ↑ |
      | --- | --- | --- | --- | --- | --- | --- | --- |
      | Blurred | × | × | × | × | 22.68 | 0.6116 | – |
      | U-Net47 | × | × | × | ✓ | 30.07 | 0.8866 | 9.78 |
      | FOV | × | ✓ | × | ✓ | 30.57 | 0.8968 | 9.00 |
      | KPN | × | × | ✓ | ✓ | 31.61 | 0.9116 | 6.08 |
      | Wiener filter33 | ✓ | × | × | ✓ | 32.78 | 0.9239 | 3.81 |
      | FOV_KPN32 | × | ✓ | ✓ | ✓ | 32.52 | 0.9232 | 5.69 |
      | Wiener_FOV | ✓ | ✓ | × | ✓ | 32.91 | 0.9255 | 3.79 |
      | Wiener_KPN | ✓ | × | ✓ | ✓ | 33.00 | 0.9270 | 3.07 |
      | Ours | ✓ | ✓ | ✓ | ✓ | 33.18 | 0.9277 | 2.99 |

    We observed that the U-Net method,47 a conventional CNN-based recovery network, exhibited the poorest reconstruction performance, failing to achieve satisfactory results across all fields because it primarily addresses globally consistent blur. Conversely, incorporating FOV attention or KPN-based deep networks32 improved the recovery of nonuniformly degraded images, with average peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) improvements of 2.45 dB and 0.0366, respectively, but these methods still suffered from artifacts [see Fig. 7(a)]. Incorporating a learned Wiener deconvolution model33 also improved the recovery of nonuniformly degraded images, with average PSNR and SSIM improvements of 2.71 dB and 0.0373, but it still suffered from some residual blur. By contrast, our proposed method, which includes all components (FOV attention, the KPN block, and the learned Wiener deconvolution model), achieved the highest average PSNR and SSIM (exceeding the state of the art by 0.4 dB and 0.0038, respectively), effectively addressing both residual blur and artifacts.

    To evaluate imaging performance, we conducted an MTF assessment using a five-bar Nyquist frequency (72 lp/mm) target, consisting of five pairs of black and white bars; the MTF calculation follows Eq. (9). Figure 7(b) presents the MTF values for the center and edge fields of view under degradation and different recovery methods. In the degraded target image, the details are indistinguishable, with MTF values of 0.087 and 0.001 for the center and edge fields, respectively. Although data-driven neural networks (e.g., U-Net and FOV_KPN) can recover the overall target pattern, they fail to effectively restore fine details, achieving a maximum MTF value of only 0.197. This further highlights the limitations of traditional data-driven neural network approaches in addressing image degradation caused by substantial manufacturing errors. By contrast, cascading the Wiener filter with neural networks significantly enhances the recovery of target details. The Wiener filter-based U-Net33 method and the proposed learned Wiener-based FOV_KPN method both substantially improve MTF values, reaching 0.308 to 0.501 and 0.345 to 0.635, respectively. These results demonstrate the superior performance of the proposed method in mitigating residual blur and artifacts compared with state-of-the-art methods.

    Table 1 also presents the reconstruction frame rates of different post-processing algorithms. Our proposed method achieves the best reconstruction quality, with a frame rate of 2.99 frames per second (fps) when processing 1224×1024 images on an NVIDIA RTX4090 GPU. Although this is lower than the 9.78 fps achieved by U-Net, the frame rate can be further improved through model pruning and quantization, optimization of the Wiener filter deconvolution process, or the development of dedicated neural network hardware tailored to our algorithm. These improvements will be investigated in future work.

    These simulation results further validate the necessity of the proposed physics-informed design paradigm, which integrates a physics-informed tolerance analysis methodology with a physics-informed neural network. As shown in Fig. 7(b), traditional data-driven neural networks (e.g., U-Net and FOV_KPN) fail to restore high-frequency details when operating within the manufacturing error boundaries determined by the tolerance analysis. This highlights the critical role of incorporating a physics-informed neural network (e.g., learned Wiener-based FOV_KPN) to achieve superior restoration performance across both low- and high-frequency information.

    4 Experimental Assessment

    To experimentally validate the effectiveness of the proposed method in developing a manufacturing-robust imaging system with the computational optics strategy, we fabricated the off-axis three-mirror freeform all-aluminum system described in Sec. 3. This section details the manufacturing, assembly, and calibration processes for the prototype, followed by an evaluation of comprehensive performance, including laboratory measurements of system MTFs and a comparison of image quality with a commercial standard lens in real-world settings.

    4.1 Experimental Prototype

    We fabricated the freeform mirrors of the designed off-axis three-mirror system using single-point diamond turning (SPDT) technology. The manufacturing errors of the three freeform mirrors were measured with a LuphoScan 420 HD instrument, as presented in Fig. 8(a). The RMS values of the freeform surface errors ranged from 0.025λ to 0.447λ, falling within the manufacturing error boundaries evaluated in the tolerance analysis discussed above.

    Figure 8. All-aluminum freeform mirror performance and system assembly. First, we employed SPDT technology to fabricate the freeform mirrors; their manufacturing errors are depicted in (a). We then assembled the system (b), measured the system wavefronts (d), and conducted a comprehensive PSF test across the entire FOV (c).

    The designed off-axis three-mirror system was assembled in a laboratory, as shown in Fig. 8(b). The alignment of the three freeform mirrors was adjusted based on the system wavefront measured by a 4D Interferometer. The resulting system wavefront maps for both center and edge FOVs are presented in Fig. 8(d), with an RMS system wavefront error of 0.525λ for the center field and 1.022λ for the edge field, confirming a successful optical system alignment. The detailed alignment analysis can be found in section 4.6 in the Supplementary Material. Finally, a comprehensive PSF test across the entire FOV with 19×16 patches is shown in Fig. 8(c). The detailed analysis of the FOV division can be found in section 4.4 in the Supplementary Material.

    4.2 Experimental Results

    To validate the high-quality imaging capability of our proposed prototype, we first fine-tuned the learned Wiener-based FOV_KPN neural network using the measured PSFs and subsequently conducted a comprehensive performance evaluation in both laboratory and real-world settings.

    Figure 9(a) presents the MTF test for our prototype across different fields of view, conducted in the laboratory. In the left panel, noticeable degradation is observed in the test pattern without image reconstruction, with MTF values ranging from 0.027 to 0.168. However, employing the image reconstruction model resulted in remarkable pattern recovery, with MTF values of the reconstructed test pattern images consistently exceeding 0.44, as shown in the right panel. It is worth noting that the experimental MTFs were slightly higher than the simulation results (0.440 to 0.714 versus 0.345 to 0.635 at 72 lp/mm), which can be attributed to the prototype's freeform surface manufacturing errors being smaller than those of the worst-performing case in the tolerance analysis (0.025λ to 0.447λ versus 0.556λ to 0.924λ in RMS).

    Figure 9. Experimental assessment. We evaluated the proposed method in both indoor (a) and outdoor (b) environments. In the indoor setting, we acquired and reconstructed five-bar Nyquist frequency target images from nine different fields, with MTF values displayed on each detail block, as shown on the left of (a). In the outdoor setting, we show images with detail blocks in different fields: a sensor capture from a Canon lens [left of (b)], a degraded measurement, and a recovery result from our proposed prototype [middle and right of (b), respectively].

    Figure 9(b) showcases real-world images captured by our prototype. As the reference ground truth, we provide a clear image captured using a commercial standard lens (Canon EF-S 60 mm f/2.8 USM) with the same focal length. The F-number of the standard lens was set to 6 to maximize its image quality, resulting in a diffraction-limited PSF size of ∼1 pixel. Our prototype exhibited enhanced sharpness in both the on- and off-axis regions. Furthermore, our method produced no additional artifacts, as confirmed by comparison with the standard lens. Additional results from different scenes are available in section 6 in the Supplementary Material.

    5 Conclusion

    We have presented a physics-informed design paradigm for manufacturing-robust imaging systems with computational optics. This is achieved by integrating a physics-informed tolerance analysis methodology with a physics-informed neural network-based post-processing algorithm. The tolerance analysis methodology utilizes the Wiener filter to reconstruct image details during the tolerance analysis process, enabling computationally efficient evaluation of manufacturing error boundaries in computational optics. In addition, the physics-informed neural network combines a learned Wiener filter with an FOV_KPN neural network, effectively addressing the significant spatially variant degradation caused by substantial manufacturing errors. As such, we have designed and prototyped a manufacturing-robust imaging system in an off-axis three-mirror freeform all-aluminum design, achieving a 25-fold relaxation of manufacturing precision, to a machining precision of only 0.5λ RMS. The system demonstrated excellent performance in both indoor and outdoor scenarios, with MTFs exceeding 0.44 at the Nyquist frequency. We envision this concept laying the foundation for modern optical systems that offer higher processing efficiency, lower processing costs, and superior imaging capabilities.

    Acknowledgments

    This work was supported by the National Natural Science Foundation of China (Grant Nos. 62192774, 62105243, 61925504, 6201101335, 62020106009, 62192770, 62192772, 62105244, 62305250, and 62322217), the Science and Technology Commission of Shanghai Municipality (Grant Nos. 17JC1400800, 20JC1414600, and 21JC1406100), the Shanghai Municipal Science and Technology Major Project (Grant No. 2021SHZDZX0100), and the Fundamental Research Funds for the Central Universities.

    Yujie Xing received his BS degree from Tongji University, Shanghai, China, in 2018. He is currently a PhD candidate at the Institute of Precision Optical Engineering, Tongji University. His research interests include optical systems, computational imaging, and hyperspectral sensing.

    Xiong Dun received his PhD from Beijing Institute of Technology, Beijing, China. He is a researcher in optical engineering at Tongji University, Shanghai, China. His research interests include computational imaging and related technologies, especially the joint design of optical systems and image processing.

    Dinghao Yang received his BE degree from Tongji University, Shanghai, China, in 2022. He is currently a graduate student at the Institute of Precision Optical Engineering, Tongji University. His research interests include optical systems and computational imaging.

    Siyu Dong received his PhD from Tongji University, Shanghai, China. He is currently an assistant professor at the Institute of Precision Optical Engineering, Tongji University. His research interests include micro/nano-optics and fabrication technology, especially optical metasurface devices and systems.

    Yifan Peng received his MSc degree in optical science and engineering from Zhejiang University, Hangzhou, China, and his PhD in computer science from the University of British Columbia, Vancouver, Canada. He is currently an assistant professor at the University of Hong Kong. His research interests include incorporating optical and computational techniques to enable new imaging modalities. He has been working on computational imaging and display with wave optics.

    Xuquan Wang received his PhD in microelectronics and solid-state electronics from the University of Chinese Academy of Sciences, Beijing, China. He is an assistant professor of optical engineering at Tongji University, Shanghai, China. His current research interests include computational imaging, hyperspectral sensing, and computer vision.

    Jun Yu received his PhD from Tongji University. He is an assistant professor at Tongji University, Shanghai, China. His research interests include ultra-precision machining and measurement of metal optical elements.

    Zhanshan Wang is the founder of the Institute of Precision Optical Engineering, Tongji University, Shanghai, China. He is a member of the 10th Shanghai Party Congress, a fellow of SPIE (the international society for optics and photonics), and a council member of the Chinese Optical Society. He is mainly engaged in research on high-performance thin films, micro- and nano-optics, precision imaging, and X-ray optics and technology.

    Xinbin Cheng received his BS degree and PhD from Tongji University, Shanghai, China, in 2004 and 2008, respectively. He is a doctoral supervisor and professor at the School of Physics Science and Engineering, Tongji University. He was supported by the National Science Foundation for Distinguished Young Scholars and was awarded the National Technology Invention Award (Second Class). His current research interests include XUV multilayers, high-power laser coatings, nanometrological transfer standards, and computational imaging.

    References

    [26] The Hubble Space Telescope Optical Systems Failure Report (1990).

    Citation: Yujie Xing, Xiong Dun, Dinghao Yang, Siyu Dong, Yifan Peng, Xuquan Wang, Jun Yu, Zhanshan Wang, Xinbin Cheng, "Retained imaging quality with reduced manufacturing precision: leveraging computational optics," Adv. Photon. Nexus 4, 046014 (2025)

    Paper Information

    Category: Research Articles

    Received: Mar. 4, 2025

    Accepted: Jun. 23, 2025

    Published Online: Jul. 25, 2025

    Author emails: Xiong Dun (dunx@tongji.edu.cn), Xuquan Wang (wangxuquan@tongji.edu.cn), Jun Yu (yujun-88831@tongji.edu.cn)

    DOI: 10.1117/1.APN.4.4.046014

    CSTR: 32397.14.1.APN.4.4.046014
