1State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou 310027, China
2Faculty of Computing, Harbin Institute of Technology, Harbin 150001, China
3Department of General Surgery, Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou 310000, China
The demand for low-cost, high-performance miniaturized optical imaging systems requires creating a new imaging paradigm. In this paper, we propose an imaging paradigm that achieves diffraction-limited imaging with a non-imaging spatial information transfer lens. The spatial information transfer lens realizes a perfect match between the space–bandwidth product (SBP) of the lens and that of the image sensor so that the collected spatial information from the object can be totally recorded and fully resolved by the image sensor. A backward wave propagation model is developed to reconstruct the object by propagating the light wave modulated by the information transfer lens back from the image space to object space. The proposed imaging paradigm breaks the point-to-point imaging structure and removes the focusing-distance constraint, allowing a flexible arrangement of the object and the image sensor along the optical axis with a compact form factor of the optical system. We experimentally demonstrate the versatility and effectiveness of the proposed imaging paradigm. The proposed imaging paradigm is low-cost, simple in configuration, flexible in arrangement, and diffraction limited with great potential applications in biomedical imaging.
1. INTRODUCTION
An image-forming optical system, which relies on a point-to-point imaging structure, aims to bring the light rays received from all object points within a desired field of view to their corresponding image points. Optical aberration is a critical factor affecting the performance of an image-forming system. Optical design optimization can be considered as the process of finding an optimal solution to an extremely overdetermined problem, by which a group of optical elements and their spatial relationships are optimized to minimize optical aberrations and yield an acceptable image quality over a desired field of view. This usually leads to complex stacks of optical elements due to the overdetermined nature of optical aberration correction. With the development of computer science, the burdens of optical systems have been shifted to computational reconstruction, which allows us to develop imaging systems with reduced optical complexity.
Computational imaging involves the joint design of imaging systems and computational algorithms to create novel imaging systems with unprecedented capabilities [1]. A typical computational imaging system mainly consists of two interdependent components: optical encoding and digital decoding [2]. The optical encoding transfers the light waves from an object into intensity measurements. The digital decoding recovers the desired physical quantity from the intensity measurements. The optical encoding and digital decoding are related by a physical model and optimized jointly to extract the object information of interest. A key challenge facing optical encoding is how to modulate the captured spatial information from the object in a desired manner [3]. Various techniques have been developed to perform optical encoding, such as point-spread-function (PSF) engineering [4,5], coded diffractive masks [6–8], and diffusers [9–11]. PSF engineering is accomplished by tailoring the PSFs of a traditional imaging system into a computationally invertible form so that a high-quality image can be reconstructed by PSF deconvolution [12–14]. A concerning issue in PSF tailoring of a traditional imaging system is that a certain number of optical surfaces are still needed for aberration control, which means the reduction in optical complexity is limited. Diffractive masks, which are widely utilized in lensless imaging to achieve a compact form factor, can be broadly divided into amplitude masks and phase masks [15–19]. A coded amplitude mask (e.g., a coded aperture in X-ray imaging) is an array of perforations in a specific arrangement [20]. The reduced light throughput and the limited achievable PSFs caused by diffraction effects are two major issues facing coded amplitude masks. A coded phase mask, which is generally a diffractive optical element (DOE), reshapes the phase of incident light with higher light throughput compared to amplitude masks. However, it remains challenging and costly to fabricate a large-scale (e.g., centimeter-scale) phase mask with minor errors. Moreover, the unavoidable speckle noise and stray light, mainly caused by the random phase profiles and fabrication errors of the coded phase mask, are knotty issues in lensless imaging, which inevitably reduce the signal-to-noise ratio and limit the achievable resolution of lensless imaging systems [21–23]. A diffuser can be considered as a pseudorandom phase mask [24]. The optical memory effect is an important characteristic of a diffuser, which enables us to reconstruct an object with speckle autocorrelation [25]. However, the field of view (FOV) of the optical memory effect is very limited due to the nature of a diffuser. To break this limitation for a larger FOV, one needs to measure the light-field transmission matrix based on diffuser encoding [26]. High-resolution imaging requires a large transmission matrix [27]. However, the experimental measurement of a large transmission matrix is rather time-consuming, and re-measurement of the transmission matrix is needed once either the object distance or the image distance is changed. Besides, diffusers also suffer from speckle noise due to their pseudorandom phase profiles, and the SBP of the scattered light beam cannot match that of the image sensor, inevitably resulting in a substantial loss of the spatial information of the object.
The demand for low-cost, high-performance miniaturized optical imaging systems requires creating a new imaging paradigm. In the current work, we propose a computational imaging paradigm that achieves diffraction-limited imaging with a non-imaging spatial information transfer lens. The proposed computational imaging paradigm, which does not rely on either aberration control or PSF tailoring, combines the following advantages: (1) the proposed imaging paradigm enables both large-scale and fine modulations of the phase distribution of a light wave without speckle noise and stray light; (2) the proposed imaging paradigm is low-cost, simple in configuration, flexible in arrangement, and diffraction limited with a compact form factor; (3) the proposed backward wave propagation model provides a powerful tool for wave optics modeling of advanced optical systems. We demonstrate the versatility and effectiveness of the proposed imaging paradigm via the diffraction-limited imaging of a standard test chart and a mouse ovarian tissue.
2. METHODS
A. Introduction to the Proposed Computational Imaging Paradigm
The proposed computational imaging paradigm is schematically shown in Fig. 1(a). A coherent light beam with a known complex field illuminates the object, and the modulated light beam is then collected and delivered by a spatial information transfer lens. The transfer lens is essentially a non-imaging lens without aberration control and PSF tailoring. Both the entrance and exit surfaces of the transfer lens are smooth optical surfaces with rotational symmetry, which are very easy to fabricate. The spatial information transfer lens has two essential functions in the proposed imaging paradigm: (1) it provides a perfect match between the SBP of the lens and that of the image sensor so that the collected spatial information from the object can be totally recorded and fully resolved by the image sensor; (2) it enables both large-scale and fine modulations of the phase distribution of a light wave without speckle noise and stray light, which are knotty issues in lensless imaging that employs pixelated masks. The light beam, after propagating through the transfer lens, is captured by an image sensor. The proposed imaging paradigm breaks the point-to-point imaging structure and removes the focusing-distance constraint, allowing a flexible arrangement of both the object and the image sensor with a compact form factor of the optical system. Since only the intensity of the outgoing light wave is recorded due to the nature of an image sensor, two intensity measurements are taken and a dual plane phase retrieval algorithm is used to retrieve the complex field of the light wave in image space. Then, a backward wave propagation model is developed to reconstruct the object by propagating the light wave modulated by the information transfer lens back from the image space to object space. With the proposed computational imaging paradigm, a diffraction-limited reconstruction of the object can be achieved. We will introduce the proposed imaging paradigm in more detail in the rest of this paper.
Figure 1.Overview of the proposed computational imaging paradigm. (a) Architecture of the proposed imaging system. A transfer lens is placed between the object and the image sensor to collect and deliver the object information. (b) Prototype of the proposed imaging paradigm. (c) The object is reconstructed from the measured intensity patterns by use of a backward wave propagation model.
B. Design of the Spatial Information Transfer Lens
As mentioned above, the spatial information transfer lens is such that the spatial information from the object collected by the lens can be totally recorded and fully resolved by the image sensor. To achieve this goal, the quantity of information collected by the transfer lens should be equal to that of the information recorded by the image sensor. That also means the SBP of the lens should not be greater than that of the image sensor. Figure 2(a) shows the design process of this spatial information transfer lens. Given a target FOV and numerical aperture (NA) of the transfer lens, which determine the resolution of the imaging system, we calculate the SBP of the lens. To fully resolve the spatial information recorded by the image sensor, the key parameters of the image sensor should satisfy

$$\mathrm{SBP}_{\mathrm{lens}}\le\left(\frac{D}{\Delta}\right)^{2},\tag{1}$$

where $\Delta$ is the pixel size of the image sensor and $D$ is the shortest side length of the photosensitive area of the sensor; $D$ can be considered as the image size. After the image sensor is chosen, the maximum frequency of the information that can be resolved by the sensor is fully determined by the pixel size $\Delta$. The angular spectrum theory tells us that a light wave with any specified spatial structure can be decomposed into a combination of plane waves with different directional angles and amplitudes. To fully record and resolve all the modulated object information collected by the transfer lens, the maximum spread angle of the outgoing light beam after refraction by the transfer lens should meet the condition defined by

$$\cos\theta_{x}\le\frac{\lambda}{2\Delta},\qquad \cos\theta_{y}\le\frac{\lambda}{2\Delta},\tag{2}$$

where $\lambda$ is the wavelength; and $\theta_x$ and $\theta_y$, respectively, denote the angle between the outgoing ray and the $x$-axis and $y$-axis. In the examples given below, we make the SBP of the transfer lens equal that of the image sensor to achieve a perfect match between the transfer lens and the image sensor. It should be noted that Eq. (2) imposes a relatively strict constraint on the propagation of the outgoing light beam, because the maximum frequency component produced by the interference of the outgoing waves on the image sensor could be less than the one determined by Eq. (2). That gives us some more degrees of freedom to design the transfer lens.
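As a quick numerical check of Eqs. (1) and (2), the following snippet evaluates the maximum admissible spread angle for a given pixel pitch, together with the diffraction-limited resolution and the per-dimension SBP of the lens (a minimal sketch; the variable names are ours):

```python
import numpy as np

wavelength = 532e-9   # illumination wavelength (m)
pixel_size = 3.2e-6   # image-sensor pixel pitch Delta (m)
na = 0.5              # numerical aperture of the transfer lens
fov = 3e-3            # diameter of the circular field of view (m)

# Eq. (2): a plane-wave component tilted by theta from the optical axis
# produces fringes of period wavelength/sin(theta) on the sensor; Nyquist
# sampling requires at least two pixels per fringe period.
theta_max = np.degrees(np.arcsin(wavelength / (2 * pixel_size)))
print(f"max spread angle: {theta_max:.2f} deg")      # ~4.8 deg here

# Diffraction-limited resolution and the SBP of the lens per dimension;
# Eq. (1) demands at least this many pixels along the sensor's short side.
resolution = wavelength / (2 * na)                   # ~0.532 um
print(f"resolution: {resolution * 1e6:.3f} um")
print(f"SBP per dimension: {fov / resolution:.0f} resolvable points")
```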
Figure 2.The optimal spatial information transfer lens. (a) Design process of the spatial information transfer lens. (b) Ray tracing of the transfer lens. (c) Spot diagrams on the observation plane. (d) Intensity distributions at sampled points on the observation plane. (e) Retrieved spectrum distributions in image space in the two examples. (f) Fabricated transfer lens. (g) Surface error maps of the entrance and exit surfaces of the transfer lens.
In the proposed imaging system shown in Fig. 1(b), the diameter of the circular FOV is 3 mm and the NA of the transfer lens equals 0.5, which means the resolution of the system equals 0.532 μm at a wavelength of 532 nm. The pixel pitch of the image sensor equals 3.2 μm, suggesting that the maximum spread angle, θmax, of the outgoing beam should be less than 4.8°. The working distance between the object plane and the entrance surface of the lens is 13.5 mm, and the lens material is N-BK7. It should be mentioned that a typical microscope objective with a numerical aperture of 0.5 usually comprises stacks of lenses for aberration control. The situation is quite different in the design of the proposed information transfer lens, because no aberration needs to be controlled. What we need to do is to make the spread angle of the outgoing beam less than 4.8°. Obviously, it is much easier to control the maximum spread angle than to control aberrations. When we optimize the transfer lens, several field points are sampled and several tangential and sagittal rays emanating from the field points are traced. Then, the spread angle of the outgoing rays is evaluated. Since neither aberration control nor PSF tailoring is required, the lens surface profiles can be very simple. It is worth mentioning that the transfer lens can also be easily optimized at multiple wavelengths over a certain wavelength range (e.g., the visible spectrum) because correction of the chromatic aberrations of the transfer lens is not needed either. That means the proposed imaging paradigm can also be applied to chromatic imaging. In the proposed imaging system, the entrance surface of the lens is a spherical surface, and the exit surface is a conic surface. After simple optimization, the lens thickness equals 9.7 mm and the clear aperture is 16.52 mm. The other optimized parameters of the lens surfaces are given in Table 1. Figure 2(b) shows the ray tracing of the optimized transfer lens. The red, green, and blue solid lines denote the light rays emanating from the on-axis field point and two off-axis field points, respectively. Figure 2(c) gives the spot diagrams produced by three sampled field points on an observation plane. The distance between the observation plane and the exit surface of the lens equals 3 mm. From Fig. 2(c) we see that the root-mean-square (RMS) spot diameter is around 14 mm, telling us again that the proposed information transfer lens is not a typical image-forming lens. Since we remove the focusing-distance constraint and impose a constraint on the maximum spread angle of the outgoing beam, a flexible arrangement of both the object and the image sensor along the optical axis is allowed with a compact form factor of the optical system. In order to investigate the incident angle distribution of the incident rays at each sampled point on the observation plane, a circular isotropic source disk with a diameter of 3 mm is placed at the object plane, and non-sequential ray tracing is performed to evaluate the intensity distribution at each sampled point on the observation plane. Figure 2(d) gives the intensity distributions at the points (0,0), (0,3), (0,7), and (0,9). From Fig. 2(d) we clearly observe that the angles of incidence at each sampled point, which are fully determined by the pixel size of the image sensor, are no more than 4.8°. Figure 2(e) gives the retrieved spectrum distributions in image space in the two examples, which will be introduced in Section 3.
This figure tells us that the highest retrieved frequency, which is also fully determined by the pixel size of the image sensor, is around 156 lp/mm. From Figs. 2(d) and 2(e) we know that the spatial information from the object collected by the transfer lens can be totally recorded and fully resolved by the image sensor. Figure 2(f) shows the information transfer lens fabricated by grinding and polishing. A detailed description of the fabrication process of the designed lens can be found in Appendix D. The surface error maps of the entrance and exit surfaces of the lens are depicted in Fig. 2(g), which gives the peak-to-valley (PV) errors of the two surfaces in units of λ, where λ is the wavelength of the light used to test the optical surfaces. It is of great interest to mention that the transfer lens can also be an optical plastic lens with smooth low-order aspheric surfaces, which is apparently low-cost, easy to fabricate, and suitable for mass production and commercialization.
Table 1. Optimized Parameters of the Two Surfaces of the Spatial Information Transfer Lens

Parameter         Entrance Surface   Exit Surface
Radius (mm)       42.3               −12
Conic constant    0                  −0.89862
C. Image Reconstruction Using a Backward Wave Propagation Model
The incoming light wave captured by the transfer lens contains the object information, and is recorded by the image sensor. However, the image sensor can only detect the intensity (or amplitude) component of light, and the phase information cannot be accessed directly. As mentioned above, we develop a backward wave propagation model that performs a full path optical inverse diffraction calculation to reconstruct the object. A key step in the inverse diffraction calculation is to calculate the complex light field of the outgoing light wave behind the transfer lens. That means we need to obtain the phase information of the outgoing light wave. Here, we employ a dual plane phase retrieval algorithm to reconstruct the complex field of the outgoing light wave from two intensity measurements recorded by the image sensor. Compared with multi-plane phase retrieval algorithms [28–31], the dual plane phase retrieval algorithm only needs two intensity measurements of the outgoing light wave [32,33]. Consequently, the errors caused by the movements of the image sensor can be greatly reduced. For ease of calculation, the complex field of the outgoing wave in the tangential plane at the vertex ($z=z_0$) of the exit surface is reconstructed using two intensity measurements at $z=z_1$ and $z=z_2$. The two planes at $z=z_1$ and $z=z_2$ are perpendicular to the optical axis of the lens, as shown in Fig. 3(a). It should be noted that an appropriate distance between the two planes should be chosen to guarantee sufficient diversity between the two intensity measurements. Figure 3(b) illustrates the dual plane phase retrieval algorithm. A weighted shrink-wrap constraint is added to the dual plane phase retrieval algorithm to speed up the convergence of phase retrieval, and a gradient descent method is used to suppress the influence of noise by minimizing the total variation metric. More detailed derivation and implementation of the dual plane phase retrieval algorithm can be found in Appendix A.
Figure 3.Key steps of the full path optical diffraction calculation. (a) Illustration of the two intensity measurements. (b) Flowchart of the dual plane phase retrieval algorithm.
Once the complex light field of the optical wave at $z=z_0$ is obtained, the next key step is to propagate the outgoing light wave back to the object plane through the transfer lens. Since the transfer lens is a thick lens, the property of space invariance or a simple transmission function [34,35] that relies on the thin lens approximation becomes invalid. To this end, we develop a backward wave propagation model that takes the lens thickness into account and propose a full path optical inverse diffraction calculation for image reconstruction. That means the outgoing light wave needs to propagate reversely from the tangential plane at the vertex of the exit surface to the object plane. The proposed inverse diffraction calculation is based on the angular spectrum theory due to its accuracy compared to the Fresnel diffraction formula. In terms of the different sampling intervals at the object and detection sides, an improved angular spectrum computation based on the Bluestein method is employed to recover the high frequencies from the lower frequencies at the detection side. The improved angular spectrum method allows us to calculate an arbitrary region of interest with an arbitrary sampling interval (see Appendix B). To propagate the outgoing light wave through the transfer lens, both the entrance and exit surfaces of the lens are divided by a set of cutting planes perpendicular to the optical axis with an unequal spacing between each two neighboring planes. The smaller the radius of curvature of the lens surface, the smaller the spacing between each two neighboring planes, and vice versa. Since the transfer lens is rotationally symmetric, the decomposition of the two surfaces yields a set of annular surface slices. We take the inverse diffraction calculation between the $l$-th and $(l+1)$-th cutting planes as an example, as shown in Fig. 4(a). The $l$-th slice of the exit surface is projected onto the $l$-th and $(l+1)$-th planes, which yields two identical annuluses $A_l'$ and $A_l$ in two different media, shown in green in Fig. 4(b). It is apparent that the annulus projection $A_l$ (on the $(l+1)$-th plane) lies in the air, and the annulus projection $A_l'$ (on the $l$-th plane) lies in the lens. Then, what we need to do is to propagate the light wave from the annulus $A_l$ to the annulus $A_l'$. Since both the annulus $A_l$ and the tangential plane at the vertex of the exit surface lie in the air and are perpendicular to the optical axis, the diffraction calculation between these two parallel planes is straightforward [36–38]. The complex field of the optical wave on the annulus $A_l$ is given by

$$U_{A_l}(x,y)=\mathcal{B}_{z_0-z_{l+1}}\{U_0(x,y)\},\tag{3}$$

with

$$\mathcal{B}_{d}\{U\}=\mathcal{F}^{-1}\left\{\mathcal{F}\{U\}\circ\exp\left(-i2\pi d\sqrt{1/\lambda^{2}-f_{x}^{2}-f_{y}^{2}}\right)\right\},\tag{4}$$

where the symbol "∘" represents the Hadamard product; $\mathcal{B}_{d}$ is the back propagation operator (at the propagating distance $d$), which is modeled as angular spectrum propagation; and $f_x$ and $f_y$ denote the spatial frequencies. Since the light wave is modulated by the $l$-th slice when propagating from the annulus $A_l$ to the annulus $A_l'$, phase compensation should be included in the diffraction calculation of the complex field of the light wave on the annulus $A_l'$, which is defined by

$$U_{A_l'}(x,y)=U_{A_l}(x,y)\circ\exp\left\{ik(n-1)\left[z_{s}(x,y)-z_{l}\right]\right\},\tag{5}$$

where $z_s(x,y)$ denotes the $z$-coordinate of a sample point on the lens surfaces; $z_l$ and $z_{l+1}$ represent the locations of the $l$-th and $(l+1)$-th planes along the optical axis; $k=2\pi/\lambda$ is the wavenumber; and $n$ is the refractive index of the lens material. The phase compensation term, which reveals the influence of the surface profile on the propagation of a light wave, can greatly reduce the number of decomposed slices. The whole propagation through the information transfer lens is introduced in Appendix C.
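For concreteness, the back propagation operator of Eq. (4) and the phase compensation of Eq. (5) can be sketched in a few lines of Python (our illustration; the sampling, sign conventions, and names are assumptions, and evanescent components are simply suppressed):

```python
import numpy as np

def back_propagate(field, distance, wavelength, dx):
    """Angular-spectrum back propagation of a sampled complex field.

    field: 2D complex array sampled at pitch dx; distance: propagation
    distance d (m). The transfer function is applied as a Hadamard product
    in the frequency domain, as in Eq. (4).
    """
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    fxx, fyy = np.meshgrid(fx, fy)
    # Keep propagating components only; evanescent waves are suppressed.
    arg = np.maximum(1.0 / wavelength**2 - fxx**2 - fyy**2, 0.0)
    h = np.exp(-1j * 2 * np.pi * distance * np.sqrt(arg))
    return np.fft.ifft2(np.fft.fft2(field) * h)

def compensate_slice(field, z_surface, z_plane, n, wavelength):
    """Phase compensation of Eq. (5) for one annular surface slice.

    z_surface: 2D array with the z-coordinate of the lens surface at each
    transverse sample; z_plane: location of the cutting plane; n: refractive
    index of the lens material.
    """
    k = 2 * np.pi / wavelength
    return field * np.exp(1j * k * (n - 1.0) * (z_surface - z_plane))
```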
It is worth highlighting that we could use the measured surface profiles of the fabricated lens instead of the nominal ones for better object reconstruction when we perform the full path optical inverse diffraction calculation. This will make the proposed imaging paradigm insensitive to fabrication errors, and certainly simplify fabrication of the transfer lens. It is also of great interest to mention that the proposed full path optical diffraction calculation can also be applied to forward wave propagation, and could be a powerful tool for wave optics modeling of advanced optical systems.
Figure 4.Illustration of decomposition operation in the full path optical diffraction calculation. (a) Perspective view of the decomposed lens with a set of cutting planes. (b) Front view of the two identical annuluses $A_l'$ and $A_l$, which are the projections of the $l$-th surface slice on the $l$-th and $(l+1)$-th planes, respectively. Since the two annuluses are identical, only one annulus is shown here for simplicity. (c) Side view of the two identical annuluses on the $l$-th plane and the $(l+1)$-th plane and illustration of the phase compensation.
A slight deviation between the actual working distance and the nominal working distance cannot be avoided due to the fabrication and alignment errors when we build the imaging system. To achieve a high-quality image reconstruction, a digital refocusing algorithm is used here to find an optimal working distance. When the complex field on the object plane is reconstructed, we calculate the gradient of the reconstructed complex amplitude. Then, the singular value decomposition of the modulus of the gradient can be written as

$$|\nabla U_{o}|=\sqrt{|G_{x}|^{2}+|G_{y}|^{2}}=V\Sigma W^{\mathrm{T}},\tag{6}$$

where $U_o$ represents the reconstructed complex amplitude at the object plane; $G_x$ and $G_y$ are the $x$ and $y$ components of the gradient; $V$ and $W$ are the matrices consisting of eigenvectors; and $\Sigma$ is the diagonal matrix of singular values $\sigma_i$. The singular value fusion of the gradient (SVFG) [39] is employed here to quantify the quality of image reconstruction, which is given by

$$\mathrm{SVFG}=\sum_{i=1}^{N}\sigma_{i},\tag{7}$$

where $N$ is the number of sample points of the object in one dimension. It is apparent that a larger value of SVFG represents a better image reconstruction. An interval that includes the nominal working distance should be provided for digital refocusing, and the length of the interval is determined by the resolution of the system. It is worth mentioning that the digital refocusing allows us to reconstruct a diffraction-limited image even if the working distance is not given.
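Equations (6) and (7) translate directly into a short refocusing routine. The sketch below reuses the back_propagate helper defined earlier and is our illustration, not the authors' code; the scan range and step are hypothetical parameters.

```python
import numpy as np

def svfg(u_obj):
    """Eq. (7): sum of the singular values of the modulus of the gradient
    of the reconstructed complex amplitude (Eq. (6))."""
    gx, gy = np.gradient(u_obj)                        # Gx, Gy
    grad_mod = np.sqrt(np.abs(gx)**2 + np.abs(gy)**2)  # |grad Uo|
    return np.linalg.svd(grad_mod, compute_uv=False).sum()

def refocus(u_entrance, candidates, wavelength, dx):
    """Back-propagate to each candidate working distance and keep the
    distance that maximizes the SVFG."""
    scores = [svfg(back_propagate(u_entrance, d, wavelength, dx))
              for d in candidates]
    return candidates[int(np.argmax(scores))]

# Example scan around the nominal working distance of 13.5 mm:
# distances = np.arange(13.4e-3, 13.6e-3, 1e-6)
# wd_opt = refocus(u_entrance, distances, 532e-9, 3.2e-6)
```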
3. RESULTS
A. Standard 1951 USAF Resolution Test Chart Reconstruction
To demonstrate both the effectiveness and versatility of the proposed imaging paradigm, we build a prototype shown in Fig. 1(b). A solid-state laser with a wavelength of 532 nm is expanded and collimated to illuminate the object. A field stop is placed in front of the object. The collimated beam modulated by the object is collected and delivered by the information transfer lens. The outgoing light is captured by an image sensor (VlXT-650C.I; Baumer, Germany), which is installed on a z-axis linear motorized stage (travel range: 100 mm; Daheng Optics, China) for the measurement of two intensity patterns. The recorded intensity patterns are monochrome images with 12-bit depth. The full path optical inverse diffraction calculation is carried out on a computer (CPU: AMD 3990X; RAM: 64 GB). First, the prototype of the proposed imaging system is used to observe a standard 1951 USAF resolution test chart. The diameter of the circular FOV equals 3 mm. Figure 5(a) shows two intensity measurements recorded at two planes. To achieve a good balance between the high signal-to-noise ratio and the diversity of the two intensity measurements, the distance between the two planes equals 20 mm. From Fig. 5(a) we can see obvious diversity between the two intensity measurements. The amplitude distribution produced by the reconstructed optical wave in the tangent plane at the vertex of the exit surface is given in Fig. 5(b). It is apparent that we cannot identify the object from the intensity measurements taken in image space due to the fact that the information transfer lens is a non-imaging lens.
Figure 5.Reconstruction of the USAF 1951 resolution test chart. (a) Two intensity patterns recorded by the image sensor. (b) Amplitude distribution in the tangent plane at the vertex of the exit surface. (c) Change of SVFG. (d) Reconstructed object with the transfer lens. (e) Reconstructed object without the transfer lens. (f) Reconstructed object with thin lens approximation.
Both the entrance and exit surfaces of the transfer lens are divided into 200 slices. After the full path optical inverse diffraction calculation, we obtain the complex field in the object space. The digital refocusing algorithm gives us the optimal working distance between the object plane and the entrance surface of the transfer lens, which equals 13.501 mm, as shown in Fig. 5(c). This figure also indicates that it is very easy to find the optimal working distance because there is only one extremum of the SVFG. Figure 5(d) shows the reconstructed amplitude of the object. From this figure we can see that Group 9 Element 2 of the test chart, with a lateral resolution of 0.87 μm, can still be resolved, indicating the effectiveness of the proposed imaging paradigm. As mentioned above, the target resolution of the system is 0.532 μm. The slight deviation between the actual resolution and the target is mainly caused by the fabrication errors of the transfer lens, the alignment errors of the imaging system, and the phase reconstruction errors.
In order to further illustrate the advantages of the proposed imaging paradigm, we remove the transfer lens, and therefore the proposed imaging system becomes a lensless imaging system. Two intensity measurements are taken, and the amplitude of the object is reconstructed, as shown in Fig. 5(e). From this figure we can see that the Group 7 Element 2 with a lateral resolution of 3.5 μm is resolved. It is worth mentioning that when the transfer lens is not used, the achievable resolution of the lensless imaging system is limited by the pixel size of the image sensor. Obviously, the achievable resolution of the proposed imaging paradigm is significantly enhanced, and is limited by the NA of the transfer lens. As mentioned above, the object is reconstructed by propagating the light wave back from the image space to object space, and a key step of the backward propagation is to propagate the light wave through the thick non-imaging transfer lens. To further demonstrate the effectiveness of the proposed backward wave propagation model, the transfer lens is replaced by a thin spherical lens, which is perhaps the most important optical element in Fourier optics. It is well known that the thin lens introduces a quadratic phase shift and the focal length of the thin lens can be calculated from the parameters of the lens given in Table 1. Figure 5(f) gives the reconstructed object, which is very blurred. The obvious difference shown in Figs. 5(d) and 5(f) demonstrates the effectiveness of the proposed backward wave propagation model. The proposed backward wave propagation model provides a powerful tool for wave optics modeling of advanced optical systems.
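For reference, the focal length of the thin-lens substitute follows from the thick-lens lensmaker's equation with the radii and thickness of Table 1, and the thin lens itself is modeled by its quadratic phase factor. The sketch below is our illustration; the refractive index value for N-BK7 at 532 nm is our assumption.

```python
import numpy as np

n, r1, r2, d = 1.5195, 42.3, -12.0, 9.7   # N-BK7 at 532 nm (assumed), mm
# Thick-lens lensmaker's equation.
inv_f = (n - 1) * (1/r1 - 1/r2 + (n - 1) * d / (n * r1 * r2))
f = 1 / inv_f                             # ~19 mm for these parameters
print(f"thin-lens focal length: {f:.2f} mm")

def thin_lens_phase(x, y, f_m, wavelength):
    """Quadratic phase factor exp[-ik(x^2+y^2)/(2f)] of an ideal thin lens."""
    k = 2 * np.pi / wavelength
    return np.exp(-1j * k * (x**2 + y**2) / (2 * f_m))
```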
B. Biomedical Samples Reconstruction
In this section, the prototype shown in Fig. 1(b) is used to observe a mouse ovarian tissue. Due to the large field of view of the prototype, the entire region of interest (ROI) denoted by the yellow circle on the sample can be captured. Figure 6(a) gives the two intensity measurements used in the dual plane phase retrieval, and Fig. 6(b) shows the amplitude distribution produced by the reconstructed optical wave in the tangent plane at the vertex of the exit surface. Similarly, we cannot identify the object from the three blurred patterns. Figure 6(c) gives the reconstructed image of the ROI. Vignette high-resolution views of the reconstructed image are shown in Figs. 6(d)–6(g). The image shown in Fig. 6(d1) is reconstructed by the proposed imaging system. Figure 6(d2) shows a blurred image captured by a 2× microscope objective with an NA of 0.08. It is apparent that the mouse ovary cells cannot be resolved by this low-power objective. The mouse ovary sample is also observed by use of a commercial microscope (SOPTOP RX50M) with a 20× objective of NA = 0.5. The image captured by the microscope is given in Fig. 6(d3). From Figs. 6(d1) and 6(d3) we can clearly see that the mouse ovary cells resolved by the proposed imaging system are comparable with those resolved by the microscope. It is also worth mentioning that the circular FOV of this commercial objective is 1.25 mm, suggesting that the proposed imaging system has a much larger FOV than the commercial objective. Figures 6(e)–6(g) show the reconstructed images of three other ROIs. From these figures we can clearly observe the mouse ovary cells. This example tells us that the proposed imaging system generates high-resolution images with only a simple and low-cost spatial information transfer lens, again indicating both the versatility and effectiveness of the proposed imaging paradigm.
Figure 6.Reconstruction of a mouse ovarian tissue. (a) Two intensity patterns recorded by the image sensor. (b) Amplitude distribution in the tangent plane at the vertex of the exit surface. (c) Full FOV image of the mouse ovarian cells. (d1), (e)–(g) Vignette high-resolution views of the image in (c). (d2), (d3) Images taken by a commercial microscope with a 2× (d2) and a 20× (d3) objective lens, for comparison.
In this section, we investigate the influence of several key factors on the quality of object reconstruction in the proposed imaging paradigm. As mentioned above, the lens surfaces need to be divided into a set of annular surface slices, and phase compensation should be made when we propagate the light wave back to the object plane. Thus, the first key factor is the number of annular surface slices generated from the surface decomposition, as shown in Fig. 7(a). How many annular surface slices should be used depends on the diameter of the lens aperture and the radius of curvature of the lens surface. The smaller the radius of curvature, the more annular surface slices are needed for accurate phase compensation. We take the transfer lens given in Fig. 2 as an example and increase the number of annular surface slices from 10 to 50. The quality of the object reconstruction is greatly improved when the number is increased from 10 to 30, as shown in Fig. 7(a). This figure also shows that the quality of the object reconstruction is only slightly improved when the number is further increased from 30 to 50, which tells us that a good balance can be achieved between the quality of the object reconstruction and the computation cost.
Figure 7.Influence of key factors on the quality of image reconstruction. (a) Reconstruction results of the standard test chart with the number of decomposed slices of 10, 30, and 50. (b) Change of the reconstruction resolution when the decenter of the lens is increased from 0.01 mm to 0.05 mm. (c) Change of the reconstruction resolution when the tilt of the lens is increased from 0.1° to 0.5°.
As also mentioned above, alignment errors cannot be avoided when we build the imaging system. Thus, it is necessary to analyze the influence of both the decenter and the tilt of the information transfer lens on the quality of image reconstruction. Due to the rotational symmetry of the lens, we only investigated the decenter of the lens along one transverse axis and the tilt of the lens around one transverse axis. Two sets of line pairs with intervals of 775 nm and 1.55 μm are simulated. Figure 7(b) shows the change of the image reconstruction when the decenter of the lens is increased from 0.01 mm to 0.05 mm. From this figure we can see only slight differences between the three reconstructed images. Figure 7(c) shows the change of the image reconstruction when the tilt of the lens is increased from 0.1° to 0.5°. From this figure we observe that the reconstructed image of the line pair with an interval of 775 nm is too blurred to identify when the tilt equals 0.5°. Figures 7(b) and 7(c) tell us that the image reconstruction is more sensitive to the tilt of the lens than to its decenter. This provides good guidance for assembling the proposed imaging system. Figure 7 also indicates that the low-frequency components are less sensitive to the number of annular surface slices and to the decenter and tilt of the lens than the high-frequency components. It is worth mentioning that we can also take the alignment errors of the imaging system into account during the backward propagation of the light wave, making the proposed imaging paradigm insensitive to the alignment errors.
It should be mentioned that the quality of the object reconstruction is also determined by the phase retrieval errors. The phase retrieval errors can be reduced by use of a good initial phase, because a good initial phase can prevent the phase retrieval from being trapped in local minima. To analyze the influence of the initial phase on the reconstruction performance, a uniform phase and a random phase are, respectively, employed to initiate the phase retrieval, as shown in Figs. 8(a) and 8(b). The number of iterations for the dual plane phase retrieval algorithm is also 500, ensuring the convergence of each phase retrieval. From Fig. 8(a) we observe that Group 9 Element 2 of the test chart can be roughly resolved, with a lower signal-to-noise ratio compared to the result in Fig. 5(d), when a uniform phase is used. However, Group 9 Element 1 cannot be resolved when a random phase is used, as shown in Fig. 8(b). Figures 5(d) and 8 tell us that the final reconstruction performance is also determined by the phase retrieval errors. This means that the initial phase should be carefully chosen to reduce the phase retrieval errors. Fortunately, a good initial phase can be easily obtained by use of the forward full path optical diffraction calculation introduced above, as indicated by the final reconstruction result shown in Fig. 5(d).
Figure 8.Influence of the initial phase on the final reconstruction results. Final reconstruction results generated from (a) a uniform phase and (b) a random phase.
The two examples presented above clearly show the versatility and effectiveness of the proposed computational imaging paradigm. Both a large FOV and high resolution are achieved only by use of a simple non-imaging spatial information transfer lens, which could be a big challenge for other computational imaging paradigms that employ a single element (e.g., a diffuser or a phase mask) for optical encoding. The non-imaging transfer lens breaks the point-to-point imaging structure and removes the focusing-distance constraint, allowing a flexible arrangement of both the object and the image sensor. The proposed imaging paradigm could be insensitive to fabrication and alignment errors. Moreover, it is low-cost, simple in configuration, flexible in arrangement, and diffraction limited with a compact form factor. Although the proposed imaging paradigm is a powerful and promising computational imaging technique, its current form still suffers from a few limitations that need to be addressed in the future. Currently, the proposed imaging paradigm still needs two intensity measurements to recover a complex light field. A sophisticated coded transfer lens could be used to reshape the complex light field to realize single-shot imaging without moving the image sensor or the transfer lens. Moreover, high-resolution reconstruction requires a large number of surface slices and sampling points, which inevitably yields heavy computation. This can be solved by improving the angular spectrum method (e.g., we could calculate the corresponding frequency ranges for different annular slices and use a more accurate phase compensation term), or by using a learned backward wave propagation model. In addition, the burdens of system calibration can be greatly reduced by developing an optimization algorithm to automatically calibrate the tilt and decenter of the transfer lens. Finally, it is of great interest to explore the wide applications of the proposed imaging paradigm in many different scenarios (e.g., biomedical imaging, cellphone microscopy) in the future.
APPENDIX A: DUAL PLANE PHASE RETRIEVAL ALGORITHM
The dual plane phase retrieval algorithm is employed in the proposed imaging paradigm to reconstruct the complex field of the outgoing wave from two intensity measurements. Multi-plane phase retrieval is a mature algorithm for phase information recovery; however, it usually stagnates at a local minimum when the number of planes is reduced. In this work, we add a weighted shrink-wrap constraint to the dual plane phase retrieval algorithm to speed up the convergence of the phase retrieval algorithm, and use a gradient descent method to suppress the influence of noise by minimizing the total variation metric. The dual plane phase retrieval algorithm with only two intensity measurements can reduce the measurement time and the errors caused by the translation of the image sensor. The flowchart of the dual plane phase retrieval algorithm is shown in Fig. 3(b), and more details are given below.
As shown in Fig. 9, two intensity measurements $I_1$ and $I_2$ are converted to the amplitudes $A_1=\sqrt{I_1}$ and $A_2=\sqrt{I_2}$ of the complex light fields. An initial complex amplitude is randomly set as $U_0^{0}$ at the plane $z=z_0$. The complex light field propagates forward to two designated planes at $z=z_1$ and $z=z_2$. Then, the two complex light fields in the two designated planes at the $j$-th iteration can be written as

$$U_m^{j}=\mathcal{P}_{z_m-z_0}\{U_0^{j}\},\quad m=1,2,\tag{A1}$$

where $\mathcal{P}_d$ is the propagation operator. An amplitude replacement is conducted to impose constraints, which is represented by

$$\tilde{U}_m^{j}=A_m\circ\frac{U_m^{j}}{|U_m^{j}|},\quad m=1,2.\tag{A2}$$
Then, the two updated complex light fields are propagated back to the initial plane at $z=z_0$, yielding two complex fields:

$$U_{0,m}^{j}=\mathcal{P}_{-(z_m-z_0)}\{\tilde{U}_m^{j}\},\quad m=1,2.\tag{A3}$$
The gradient descent minimization based on the total variation metric is applied to $U_{0,m}^{j}$ to improve the signal-to-noise ratio. The smoothed complex fields can be written as

$$\hat{U}_{0,m}^{j}=U_{0,m}^{j}-\alpha\,\nabla_{U}\mathrm{TV}\big(U_{0,m}^{j}\big),\quad m=1,2,\tag{A4}$$

where $\alpha$ is the gradient step that equals 0.05. We take the average of $\hat{U}_{0,1}^{j}$ and $\hat{U}_{0,2}^{j}$, which is given by $\bar{U}_{0}^{j}=(\hat{U}_{0,1}^{j}+\hat{U}_{0,2}^{j})/2$. Then, we impose a weighted feedback constraint on the estimated complex field at the $(j+1)$-th iteration, which can be written as

$$U_{0}^{j+1}=w_{1}\,\bar{U}_{0}^{j}+w_{2}\,\big(\bar{U}_{0}^{j}-U_{0}^{j}\big),\tag{A5}$$

where $w_1$ and $w_2$ are the weights. We assume that $w_1$ and $w_2$, respectively, equal 0.7 and 0.5 in the examples. The correlation between the reconstructed amplitude and the captured amplitude in the plane at $z=z_1$ is employed to quantify the performance of the dual plane phase retrieval, which is given by

$$C=\frac{\sum_{p}\big(\hat{A}_{1,p}-\langle\hat{A}_{1}\rangle\big)\big(A_{1,p}-\langle A_{1}\rangle\big)}{\sqrt{\sum_{p}\big(\hat{A}_{1,p}-\langle\hat{A}_{1}\rangle\big)^{2}\sum_{p}\big(A_{1,p}-\langle A_{1}\rangle\big)^{2}}},\tag{A6}$$

where $\hat{A}_{1}$ and $A_{1}$ represent the normalized reconstructed amplitude and the captured amplitude in the plane at $z=z_1$, respectively; $\langle\hat{A}_{1}\rangle$ and $\langle A_{1}\rangle$ are the mean values of the normalized reconstructed amplitude and the captured amplitude; and the index $p$ runs over the total number of pixels. A larger value of correlation represents a better phase recovery. We could obtain the complex field in the tangential plane at $z=z_0$ once the iteration gets saturated. Figure 10 shows the convergence of the dual plane phase retrieval. From this figure we can see the iteration converges stably and gets saturated at the 500th iteration.
Figure 9.Two intensity patterns are measured to recover the complex amplitude.
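A condensed sketch of the loop of Eqs. (A1)–(A5) is given below, reusing the back_propagate helper from Section 2.C. The shrink-wrap support constraint is omitted, the total-variation gradient step is replaced by a simple smoothing step, and all names and defaults are our assumptions rather than the authors' implementation.

```python
import numpy as np

def forward_propagate(field, distance, wavelength, dx):
    """Forward angular-spectrum propagation (sign-flipped back_propagate)."""
    return back_propagate(field, -distance, wavelength, dx)

def dual_plane_retrieval(i1, i2, dz1, dz2, wavelength, dx,
                         n_iter=500, w1=0.7, w2=0.5, step=0.05):
    """Recover the complex field at z0 from intensities at z1 and z2.

    dz1, dz2: distances from the reconstruction plane z0 to the two
    measurement planes. The amplitude replacement implements Eq. (A2),
    the weighted feedback implements Eq. (A5).
    """
    a1, a2 = np.sqrt(i1), np.sqrt(i2)
    u0 = a1 * np.exp(1j * 2 * np.pi * np.random.rand(*i1.shape))  # random start
    for _ in range(n_iter):
        back = []
        for a, dz in ((a1, dz1), (a2, dz2)):
            u = forward_propagate(u0, dz, wavelength, dx)     # Eq. (A1)
            u = a * u / (np.abs(u) + 1e-12)                   # Eq. (A2)
            ub = back_propagate(u, dz, wavelength, dx)        # Eq. (A3)
            # Crude smoothing step in place of the full TV gradient descent.
            local_mean = 0.25 * (np.roll(ub, 1, 0) + np.roll(ub, -1, 0)
                                 + np.roll(ub, 1, 1) + np.roll(ub, -1, 1))
            back.append(ub - step * (ub - local_mean))        # Eq. (A4)
        u_mean = 0.5 * (back[0] + back[1])
        u0 = w1 * u_mean + w2 * (u_mean - u0)                 # Eq. (A5)
    return u0
```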
APPENDIX B: FOURIER TRANSFORM BASED ON THE BLUESTEIN METHOD
Due to the different sampling intervals at the object and detection sides, an improved angular spectrum computation based on the Bluestein method is employed here to recover the high frequencies from the lower frequencies at the detection side. In addition, the decomposed annuluses belong to different regions on the lens surface, and accordingly correspond to different frequency ranges of spatial information. In order to reduce the computational costs, calculation of an arbitrary region of interest with an arbitrary sampling interval is necessary. Here, we take the inverse Fourier transform of the spectrum $\tilde{U}(f_x,f_y)$ based on the Bluestein method as an example, which is given as

$$U(x_m,y_q)=\sum_{n}\sum_{p}\tilde{U}(f_{x,n},f_{y,p})\exp\left[i2\pi\left(f_{x,n}x_{m}+f_{y,p}y_{q}\right)\right],\tag{B1}$$

where $x_m=x_0+m\delta_x$, $y_q=y_0+q\delta_y$, $f_{x,n}=f_{x,0}+n\delta_{f_x}$, and $f_{y,p}=f_{y,0}+p\delta_{f_y}$. Here, $\delta_{f_x}$, $\delta_{f_y}$, $\delta_x$, and $\delta_y$, respectively, represent the sampling intervals in the spectrum and space domains. The space intervals can be changed by adjusting the scaling factors $\delta_x$ and $\delta_y$. We define the back propagator (in Section 2) using the (inverse) Fourier transform based on the Bluestein method as $\mathcal{B}^{\mathrm{Blu}}_{d}$.
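For a one-dimensional illustration, Eq. (B1) can be evaluated on an arbitrary output grid with the standard Bluestein (chirp-z) construction, which needs only three FFTs. The routine below is a generic sketch of that construction, not the authors' code:

```python
import numpy as np

def bluestein_dft(u, delta_f, delta_x, x0, f0, m_out):
    """Evaluate u(x_m) = sum_n u[n] exp(+i*2*pi*f_n*x_m) on an arbitrary grid,
    with f_n = f0 + n*delta_f and x_m = x0 + m*delta_x, m = 0..m_out-1."""
    n = len(u)
    # Absorb the x0-dependent phase into the input samples.
    a = u * np.exp(2j * np.pi * np.arange(n) * delta_f * x0)
    phi = np.pi * delta_f * delta_x            # half-exponent of w = exp(2i*phi)
    n_idx = np.arange(n)
    m_idx = np.arange(m_out)
    chirp_n = np.exp(1j * phi * n_idx**2)
    chirp_m = np.exp(1j * phi * m_idx**2)
    # w^(nm) = w^(n^2/2) w^(m^2/2) w^(-(m-n)^2/2): a convolution with a chirp.
    L = int(2 ** np.ceil(np.log2(n + m_out - 1)))   # FFT length, zero-padded
    f1 = np.fft.fft(a * chirp_n, L)
    # Chirp kernel exp(-i*phi*k^2) for lags k = m - n, including negative lags.
    k = np.concatenate([m_idx, -np.arange(L - m_out, 0, -1)])
    f2 = np.fft.fft(np.exp(-1j * phi * k**2))
    conv = np.fft.ifft(f1 * f2)[:m_out]
    return np.exp(2j * np.pi * f0 * (x0 + m_idx * delta_x)) * chirp_m * conv
```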
APPENDIX C: FULL PATH OPTICAL DIFFRACTION CALCULATION METHOD
Since the complex light field at $z=z_0$ has been obtained, the wave propagation through the designed lens is a key step in the full path optical inverse diffraction calculation. A diagram of the lens decomposition is shown in Fig. 11. It should be noted that the cutting planes used to divide the lens [Fig. 4(a)] are not depicted in Fig. 11. A virtual plane is placed at $z=z_v$ between the entrance and the exit surfaces of the lens. Then, the complex fields on all annular projections of the exit surface slices are propagated back to this virtual plane, producing a complex field on this virtual plane. Here, the exit surface of the lens is divided into $L$ slices and the entrance surface of the lens is divided into $M$ slices. For the $l$-th ($m$-th) slice of the exit (entrance) surface, two identical annuluses are formed by projection of the slice onto the two neighboring cutting planes. The annular projection on the right cutting plane is denoted by $A_l$ ($B_m$) and the projection on the left cutting plane is denoted by $A_l'$ ($B_m'$). There are two annuluses on each cutting plane except the first and the last cutting planes of the lens, which are the tangential planes at the vertexes of the entrance and exit surfaces of the lens.
Figure 11.Illustration of the lens decomposition: the annuluses projected from the slices to the adjacent planes are marked in green to better show the full path diffraction calculation process.
As shown in Fig. 11, it is apparent that the annulus $A_l$ and the tangential plane at the vertex of the exit surface lie in the air and are perpendicular to the optical axis; the diffraction calculation between the two parallel planes is straightforward. The complex light field on the annulus $A_l$ is given by

$$U_{A_l}(x,y)=\mathcal{B}^{\mathrm{Blu}}_{z_0-z_{l+1}}\{U_{0}(x,y)\},\quad r_{l}^{\mathrm{in}}\le\sqrt{x^{2}+y^{2}}\le r_{l}^{\mathrm{out}},\tag{C1}$$

where $z_{l+1}$ represents the location of the $(l+1)$-th cutting plane along the optical axis; $\mathcal{B}^{\mathrm{Blu}}_{d}$ is the back propagation operator based on the Bluestein method as depicted above; and $r_l^{\mathrm{in}}$ and $r_l^{\mathrm{out}}$ denote the inner radius and outer radius of the annulus, respectively. It is worth mentioning that if the transfer lens is not rotationally symmetric (e.g., a freeform lens), the calculation can also be conducted by modifying the annular window into a general aperture function $W_l(x,y)$:

$$U_{A_l}(x,y)=\mathcal{B}^{\mathrm{Blu}}_{z_0-z_{l+1}}\{U_{0}(x,y)\}\circ W_{l}(x,y).\tag{C2}$$
Since the complex field on the annulus $A_l$ is modulated by the $l$-th surface slice, phase compensation should be included in the diffraction calculation, which is given by

$$U_{A_l'}(x,y)=U_{A_l}(x,y)\circ\exp\left\{ik(n-1)\left[z_{s}^{\mathrm{ex}}(x,y)-z_{l}\right]\right\},\tag{C3}$$

where $z_s^{\mathrm{ex}}(x,y)$ denotes the $z$-coordinate of the sampled points on the exit surface. Then, the complex field on each annular projection of all the exit surface slices is propagated back to the virtual plane, yielding a complex field on the virtual plane, which can be written as

$$U_{v}(x,y)=\sum_{l=1}^{L}\mathcal{B}^{\mathrm{Blu}}_{z_{l}-z_{v}}\{U_{A_l'}(x,y)\}.\tag{C4}$$
It can be seen from Eq. (C4) that the computation costs are greatly reduced by use of this virtual plane, because we would otherwise have to propagate the light field on all the annuluses to one of the annuluses successively. Since both the virtual plane and the annuluses $B_m$ lie inside the lens and are perpendicular to the optical axis, the complex amplitude of the optical wave on the annulus $B_m$ is given by

$$U_{B_m}(x,y)=\mathcal{B}^{\mathrm{Blu}}_{z_{v}-z_{m}'}\{U_{v}(x,y)\},\quad \rho_{m}^{\mathrm{in}}\le\sqrt{x^{2}+y^{2}}\le\rho_{m}^{\mathrm{out}},\tag{C5}$$

where $\rho_m^{\mathrm{in}}$ and $\rho_m^{\mathrm{out}}$ represent the inner and outer radii of the annulus, respectively, and $z_m'$ is the location of the corresponding cutting plane. Similarly, the complex light field on the annulus $B_m'$ is also determined by adding a phase compensation term to the complex light field on the annulus $B_m$, which can be written as

$$U_{B_m'}(x,y)=U_{B_m}(x,y)\circ\exp\left\{ik(n-1)\left[z_{s}^{\mathrm{en}}(x,y)-z_{m}'\right]\right\},\tag{C6}$$

where $z_s^{\mathrm{en}}(x,y)$ denotes the $z$-coordinate of the sampled points on the entrance surface. A sum of the complex light fields propagated back from all the annuluses is calculated on the tangential plane ($z=z_e$) at the vertex of the entrance surface for the digital refocusing:

$$U_{e}(x,y)=\sum_{m=1}^{M}\mathcal{B}^{\mathrm{Blu}}_{z_{m}'-z_{e}}\{U_{B_m'}(x,y)\}.\tag{C7}$$
We can see that the full path optical diffraction calculation method is basically composed of angular spectrum calculation and phase compensation. This backward wave propagation model offers an accurate and universal way to simulate the optical wave propagation through thick lenses. It is of great interest to mention that the proposed full path optical diffraction calculation can also be applied to forward wave propagation, and could be a powerful tool for wave optics modeling of advanced optical systems.
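As a compact illustration of Eqs. (C1)–(C7), the backward pass through one decomposed surface can be organized as the loop below, reusing the back_propagate and compensate_slice helpers from Section 2.C; the slice bookkeeping (a list of cutting-plane locations, annulus radii, and sampled surface heights) is our assumed data layout, not the authors' code.

```python
import numpy as np

def propagate_through_surface(u0, z_start, slices, z_target, n,
                              wavelength, dx, xx, yy):
    """Back-propagate a field through one decomposed lens surface.

    slices: list of dicts with the cutting-plane location 'z', the annulus
    radii 'r_in'/'r_out', and the sampled surface height 'z_surf' (2D array).
    The contributions of all annular slices are summed on the plane at
    z_target (the virtual plane or a vertex plane), as in Eqs. (C4)/(C7).
    """
    r = np.sqrt(xx**2 + yy**2)
    u_out = np.zeros_like(u0)
    for s in slices:
        # Eqs. (C1)/(C5): field on the annular projection, cropped by radii.
        u_ann = back_propagate(u0, z_start - s['z'], wavelength, dx)
        u_ann = np.where((r >= s['r_in']) & (r <= s['r_out']), u_ann, 0)
        # Eqs. (C3)/(C6): phase compensation of the surface slice.
        u_ann = compensate_slice(u_ann, s['z_surf'], s['z'], n, wavelength)
        # Eqs. (C4)/(C7): accumulate the slice contributions on z_target.
        u_out += back_propagate(u_ann, s['z'] - z_target, wavelength, dx)
    return u_out
```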
APPENDIX D: FABRICATION OF THE DESIGNED LENS
The designed information transfer lens was fabricated by grinding and polishing. The two surface profiles of the lens were measured with a Taylor Hobson profilometer. Anti-reflective films were coated on both the entrance and exit surfaces of the lens to reduce stray light.
[1] A. Bhandari, A. Kadambi, and R. Raskar, Computational Imaging (MIT Press, 2022).