High-quality wide-angle holographic content is at the heart of the success of near-eye display technology. This work proposes the first digital holographic (DH) system enabling the recording of wide-angle scenes assembled from objects larger than the setup field of view (FOV), which can be directly replayed without 3D deformation in a near-eye display. Hologram formation in the DH system comprises free space propagation and a Fourier transform (FT), which are connected by a rectangular aperture. First, the object wave propagates in free space to the rectangular aperture. Then, the band-limited wavefield is propagated through a single lens toward the camera plane. The rectangular aperture can take two sizes, depending on whether the DH operates in off-axis or phase-shifting recording mode. An integral part of the DH solution is a numerical reconstruction algorithm consisting of two elements: fringe processing for object wave recovery and wide-angle propagation to the object plane. The second element simulates propagation through both parts of the experimental system. The free space part is handled by a space-limited angular spectrum compact space-bandwidth algorithm, while for propagation through the lens, a piecewise FT algorithm with Petzval curvature compensation is proposed. In the experimental part of the paper, we present the wide-angle DH system with a FOV of 25° × 19°, which allows high-quality recording and reconstruction of large complex scenes.
1. INTRODUCTION
Augmented and virtual reality (AR/VR) is increasingly reaching consumers, mainly for entertainment purposes, although there is space for applications in other areas of life. This motivates the continued development of 3D displays [1]. Among modern 3D imaging techniques, holography is considered the most promising and forward-looking since it involves recording and reproducing the optical field generated by real objects. Therefore, the holographic image is able to reproduce all the cues of three-dimensional perception and is free of the accommodation-convergence conflict. The most promising solutions for AR/VR applications are holographic near-eye displays (HNEDs) [2], in which the generated content and projected image must match the parameters of the human eye’s vision. Holographic near-eye displays show great commercialization potential, which can only be unleashed if HNEDs can deliver high-quality holographic content in a wide field of view (FOV).
For this to happen, both the content and display sides must provide high image quality in a large FOV. HNED configurations mainly rely on high-resolution spatial light modulators (SLMs), which are capable of reproducing a high-quality image and, by using optical magnification [3], enable a wide FOV. Unfortunately, wide-angle content is limited to numerical encoding of 3D models using the computer-generated hologram (CGH) technique. Current CGH algorithms can generate nonparaxial holograms of objects larger than the dimensions of the generated hologram [4–7], but the computational effort increases with larger FOV and higher image quality. To provide a lifelike representation of the object, CGH algorithms must incorporate visual effects that occur in real life. The only technique with such capabilities is registration on holographic plates, where the obtained quality of the reconstructed image is considered a gold standard. The available CGH algorithms support some of the visual effects, such as nonuniform scene illumination [4,8], occlusion culling [9–11], and shadows [4,8]. On the other hand, including other visual effects, e.g., reflections, dispersion, and semitransparency [12], while keeping a high-quality image is still challenging. Implementing each of these visual effects increases the CGH algorithm’s complexity and computational effort. Therefore, further extensive research in this area is required to achieve the gold standard of image quality with CGH.
The other approach to holographic content generation, researched concurrently with the CGH algorithms, is digital holography (DH) [13–15]. DH employs the same principles as classical holography and, by those means, has the potential to reach the holographic gold standard. However, the development of DH is not as dynamic as that of CGH techniques due to current hardware restrictions, namely the limited spatial resolution of CCD/CMOS sensors. In the most popular DH techniques, the Fresnel and lensless Fourier setups, the field angle is given by 2θ = 2 arcsin(λ/2Δ), where λ is the wavelength and Δ is the camera pixel pitch. In these classical configurations, the obtained FOV is small, on the order of a few degrees, as reported in Refs. [16–18]. The narrow FOV limits the applicability of DH in near-eye holographic displays. Thus, further research on widening the FOV is needed.
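For orientation, the following minimal sketch evaluates this field-angle relation; the wavelength and pixel pitch are assumed typical values, not the parameters of the setups reported in Refs. [16–18].

```python
import numpy as np

# Field angle of a classical (Fresnel / lensless Fourier) DH setup:
# 2*theta = 2*arcsin(lambda / (2*Delta)).  Both values below are assumptions
# chosen only for illustration.
wavelength = 532e-9    # m, assumed laser wavelength
pixel_pitch = 6.4e-6   # m, assumed camera pixel pitch

field_angle = 2 * np.degrees(np.arcsin(wavelength / (2 * pixel_pitch)))
print(f"Classical DH field angle: {field_angle:.1f} deg")  # ~4.8 deg
```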
In the present state of the art, there are several approaches to extend the capabilities of DH capturing systems. A hologram’s spatial resolution, which is limited by the sensor size, can be increased by employing the synthetic aperture concept [18,19]. DH usually uses coherent light sources, which introduce speckle noise that reduces the quality of the hologram. Some approaches for speckle noise reduction are numerical filtering or averaging multiple holograms [17,20,21]. Another approach is to use partially coherent illumination [22–24]. In terms of FOV expansion, the DH capture system can use concave lenses, as in Ref. [25]. There, the field angle is increased by demagnifying the object for the hologram formation, but the angle is still limited by the paraxial approximation used, and the applied object magnification results in a distorted image. The other approach for extending the FOV is the employment of far infrared illumination [26], but the used wavelength causes mismatch problems when applied to HNEDs [27,28]. In addition, all of the above solutions have a significant drawback, which is the limitation of the recorded scene: it must be a single object smaller than the system FOV. This limitation stems from aliasing encoded in the holographic fringes. To the best of our knowledge, no work addresses this issue. The registration of wide-angle holograms of deep, large, and complex scenes exceeding the system FOV is a key feature to unlock the full potential of DH when applying it to HNEDs. For example, this allows adding to the hologram such important, complementary elements of real scenes as the ground, background, or other objects of arbitrary location.
This work presents a novel concept of a wide-angle DH (WADH) setup with the capability of extended FOV and aliasing-free registration, enabling holographic recording of large and complex scenes. The proposed solution consists of a novel optical holographic system for hologram recording as well as a new processing path that allows numerical reconstruction of the recorded scene. The WADH recording system comprises two parts: Fourier transforming with a single lens and wide-angle free space propagation. The first part implements the Fourier transform (FT) between the camera plane and the corresponding Fourier plane, where a rectangular aperture is placed for spatial limitation of the object wave. The second part is the wide-angle propagation in free space between the object and the Fourier plane. The Fourier plane in our setup is equivalent to the hologram plane of the classical DH approach. Therefore, the holographic signal from that plane can be a source of content for a holographic near-eye display.
The advantage of the WADH technique is that the sampling of its Fourier plane defines the FOV of the system, which can be modified by the focal length of the Fourier lens. The system can record holographic fringes in two modes, i.e., off-axis and phase-shifting. In off-axis mode, the aperture size in the Fourier plane is reduced, and, in consequence, the off-axis hologram reconstruction has a lower resolution. For hologram reconstruction, a numerical reconstruction algorithm consisting of two elements, fringe processing for object wave recovery and wide-angle propagation to the object plane, is developed. The fringe processing differs depending on the hologram capturing mode; it contains frequency filtering and arithmetic fringe manipulation.
The propagation processing part simulates accurate wide-angle propagation from the detector plane to the object plane, preserving the 3D geometry of recorded objects. Achieving large FOV requires a short focal length, resulting in Petzval curvature. This aberration is compensated numerically by a two-step solution. The first step is a calibration procedure in which the center of the Petzval curvature is found. Meanwhile, the second step is a numerical algorithm for field curvature compensation based on bilinear approximation. The Petzval curvature compensation enables us to simulate the field propagation between the detector and the Fourier plane of the system. The next part of the reconstruction algorithm, propagation to the objects, is implemented using the angular spectrum compact space-bandwidth product (AS-CSW) method [29,30]. The principle of the system operation is confirmed by showing numerical reconstructions of recorded holograms with a large FOV. In this experiment, a scene consisting of many objects placed at different depths and with size exceeding the FOV of the system is holographically recorded.
In Section 2, the optical scheme of the WADH system is described. Section 3 explains the hologram formation process, where the implication of using a short-focal lens is discussed. Next, Section 4 introduces the numerical hologram reconstruction tool. Subsequently, Section 5 discusses the process of calibrating the system. Section 6 is the experimental part, in which reconstructions of the recorded holograms are presented, confirming the novel properties of the WADH system. Section 7 summarizes the major developments of this work.
2. OPTICAL SETUP OF WADH
The scheme of the WADH system is presented in Fig. 1. The laser beam of wavelength λ is polarized elliptically with the quarter-wave plate (QWP) and half-wave plate (HWP). Then, the beam is unevenly divided into reference and object waves on a polarizing beam splitter cube (PBSC). The reference beam passes through a linear polarizer (LP), which is used to attenuate the reference beam intensity. The beam splitter cube (BSC) with mirror M1 attached to the piezo-electric actuator (PZT) redirects the beam and, at the same time, enables the phase-shifting method. Further, the redirected reference beam is expanded with a microscope objective (MO) and a pinhole (PH), and then collimated with the collimating lens (CL). Finally, the reference wave is incident on a CMOS sensor (JAI SP-20000M-USB, resolution 5120 × 3840 pixels, pixel pitch 6.4 μm) used for hologram recording, after being reflected on the first surface of the beam splitting plate (BSP). The object beam is divided into two illumination paths where the uniform illumination of a scene is achieved with the engineered diffusion plates (D), which gives a fine speckle pattern. The used diffusion plates are Thorlabs ED1-S50, which provide square-shaped illumination at a 50° diffusion angle with a top-hat intensity profile. To obtain a high-quality hologram, it is important to place the object within the area of quasi-constant illumination. A part of the light scattered on the surface of the registered object propagates toward the rectangular spatial filter (RF). This spatial filter is placed in the focal plane of the lens L (Thorlabs AC508-075-A-ML, f = 75 mm, Ø = 50.8 mm). The object beam passes through L to its back focal plane, where the CMOS sensor is located. In this plane, the object beam interferes with the reference plane wave, creating the hologram of the recorded scene.
3. HOLOGRAM FORMATION
This section gives details about the registration process and describes the features of the captured hologram. The registration process is illustrated with the conceptual diagram showing the important parameters of the object and reference waves, presented in Fig. 2. Depending on the selected holographic registration mode, the reference wave is an on-axis or off-axis plane wave. The direction of propagation of the reference beam is set by the beam splitter plate, shown in Fig. 2 with a semitransparent green area. The formation process of the object beam is more complex. The object wave travels from the object plane to the RF plane, where it is spatially limited. The size of the RF is set to be equal to the dimensions of the object beam at that plane. Then, after focusing by the lens L, it arrives at the CMOS plane. Consequently, the process of generating the object wave can be divided into two main stages that determine the properties of the captured hologram. The first stage is a spatial limitation of the wide-angle object wave, while the second is the FT.
Figure 2.Concept of hologram registration in the WADH setup.
The hologram, recorded at the CMOS plane, is an interference pattern of the reference wave R and the object wave O, and its distribution can be written as I_H = |R|² + |O|² + RO* + R*O, where * stands for the conjugation operation. An example of a recorded hologram is shown in Fig. 3, where it can be seen that the hologram resembles the distribution of a defocused object, whereas the accompanying magnification illustrates the fine structure of the holographic fringes coding the object’s shape.
Figure 3.(a) Exemplary hologram and (b) zoom of the fringe pattern.
The captured hologram is a discrete signal of size N_x × N_y and sampling interval Δ, which are also the pixel number and pixel pitch of the camera, respectively. Figure 2 shows that the object waves at the filter and CMOS planes are Fourier transform related. Thus, the sampling interval of the object wave at the filter plane equals δ_x = λf/(N_xΔ) and δ_y = λf/(N_yΔ), where f is the focal length of lens L. Consequently, the spatial size of the object wave at the filter plane can be maximally N_xδ_x = λf/Δ and N_yδ_y = λf/Δ. Thus, at this plane, the filter RF determines the spatial dimensions of the object wave. The selection of the filter size is discussed later in this section. The sampling period at the filter plane determines the maximal registration angle of an object as θ_x = arcsin(N_xΔ/2f) and θ_y = arcsin(N_yΔ/2f). Thus, the field angle of the WADH system is 2θ_x × 2θ_y.
This relationship defines one of the geometrical properties of the WADH technique, which is the size of the recorded image. The field angle grows with the sensor size and shrinks with the focal length. Thus, it can be enlarged using a short focal length or a large-area sensor. As a result, larger scenes can be recorded, where the transverse size of the registered volume at the distance z is approximately 2z tan θ_x × 2z tan θ_y. The second geometrical property of the WADH technique is the reconstructed image resolution. For this, an on-axis object point generating a spherical wave is illustrated in purple in Fig. 2. As shown, the NA of this spherical wave, which is registered in the hologram, determines the resolution of the image. Since the maximum filter size is λf/Δ in both directions, the maximum NA, and hence the resolution limit λ/(2NA), is the same for the x and y directions.
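The same relation applied to the WADH geometry gives the system field angle from the sensor size and the focal length of lens L. The sketch below uses values assumed from the components named in Section 2 (a 32.8 mm × 24.6 mm sensor and a 75 mm lens); they are assumptions used for illustration, not values quoted in this section.

```python
import numpy as np

# WADH field angle: 2*theta = 2*arcsin(N*Delta / (2*f)) per direction,
# where N*Delta is the sensor size and f is the focal length of lens L.
# Both numbers below are assumptions based on the components of Section 2.
focal_length = 75.0                     # mm, assumed focal length of lens L
sensor_size = np.array([32.8, 24.6])    # mm, assumed CMOS width and height

fov = 2 * np.degrees(np.arcsin(sensor_size / (2 * focal_length)))
print(f"WADH field angle: {fov[0]:.1f} deg x {fov[1]:.1f} deg")  # ~25 x 19 deg
```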
In the WADH setup, holograms can be registered in two modes: (i) phase-shifting and (ii) off-axis, which offer full and reduced bandwidth, respectively. In the first mode, the five-frame phase-shifting technique [31] and the on-axis reference wave are applied. On the other hand, the FT method is used for the off-axis holograms. The FT method requires fringes with a carrier frequency, which is introduced by setting the off-axis angle of the reference wave. For these two modes, different sizes of the RF, as shown in Fig. 4, are employed. These sizes are selected to cover the bandwidths of the object beams that can be recorded in the phase-shifting or off-axis modes. Consequently, the captured holograms have different resolutions. Bold notation is used for marking vector parameters. In Fig. 4, the green-dashed box marks the maximum size of the object wave, while white rectangles illustrate the filters used in the two capture modes. Registering a hologram using the first mode, the phase-shifting method, enables utilization of the entire Fourier space, since the filter can take its maximum size λf/Δ in both directions. As a result, the resolution of the recorded object is maximized and equal in both directions. An example of a Fourier spectrum of a hologram recorded with the phase-shifting method is presented in Fig. 4(a). In the second mode, holograms can be registered as a single frame. However, this approach needs allocated frequency space to separate the object term from the conjugated term and the zero order. This requires a reduction of the size of the spatial filter and the introduction of the carrier frequency. The required carrier spatial frequency is introduced by a slight rotation of the beam splitter plate. The separation of the unwanted terms requires reducing the spatial filter size in one direction to 1/4 of its maximum. Thus, as shown in Fig. 4(b), the filter dimensions are reduced accordingly. The consequence of this smaller filter is a four-fold reduced bandwidth of the hologram in that direction for the single-shot mode.

In the WADH system the object wave travels from the object scene to the RF, which can be described by using the angular spectrum as u_RF(x, y) = rect(x/B_x, y/B_y) ∬ U_o(f_x, f_y) exp(i2πz_o f_z) exp[i2π(f_x x + f_y y)] df_x df_y, where rect is the rectangular function representing the RF aperture of size B_x × B_y, U_o denotes the Fourier transform of the object field u_o, (f_x, f_y) is the transverse frequency vector in the object plane, z_o is the object distance, and f_z = (λ⁻² − f_x² − f_y²)^(1/2) is the corresponding longitudinal frequency of the angular spectrum transfer function.
Figure 4.Filter size, its position, and the exemplary Fourier spectrum of recorded hologram in (a) phase-shifting and (b) off-axis mode.
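The forward model just described, i.e., angular spectrum propagation over the distance z_o followed by the rectangular limitation at the RF, can be sketched as follows. The grid, units, and parameters are placeholders; this is an illustration of the relation above, not the implementation used in the paper.

```python
import numpy as np

def angular_spectrum_to_rf(u_obj, wavelength, pitch, z, rf_size):
    """Propagate an object field over distance z with the angular spectrum
    method and apply a rectangular (RF) aperture of size rf_size = (Bx, By).
    All parameters are illustrative; units must be consistent (e.g., metres)."""
    ny, nx = u_obj.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Longitudinal frequency of the AS transfer function; evanescent waves suppressed.
    fz_sq = 1.0 / wavelength**2 - FX**2 - FY**2
    fz = np.sqrt(np.maximum(fz_sq, 0.0))
    H = np.exp(2j * np.pi * z * fz) * (fz_sq > 0)
    u_prop = np.fft.ifft2(np.fft.fft2(u_obj) * H)
    # Rectangular spatial filter (RF) limiting the wavefield.
    x = (np.arange(nx) - nx // 2) * pitch
    y = (np.arange(ny) - ny // 2) * pitch
    X, Y = np.meshgrid(x, y)
    rect = (np.abs(X) <= rf_size[0] / 2) & (np.abs(Y) <= rf_size[1] / 2)
    return u_prop * rect
```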
The WADH system works over a wide FOV. Therefore, lens L does not focus plane waves of different angles onto a planar surface but onto a curved one. Such a surface is known as field curvature, one of the primary aberrations of wide-angle systems. Figure 2 shows the generated field curvature using the black dashed line and the red line. The black dashed line shows the generated field curvature, while the red line shows its generation for one selected plane wave propagating at an angle from the filter plane. Here, we denote the field curvature as a distance z_c that depends on the field angle θ. This distance is the longitudinal location where a plane wave of a particular angle focuses to its minimum spot size. In consequence, the FT of the field is obtained not on the CMOS plane but on the curved surface defined by z_c(θ). Thus, the object waves at the RF and CMOS planes are related by an FT-type integral containing an additional field-dependent phase term [Eq. (4)].
Equation (4) differs from the FT integral by an additional quadratic spherical phase component, which depends on z_c and represents the field curvature. The distribution of z_c should be known for hologram reconstruction. In this regard, the optical design software OSLO was used. For the calculations, a model of the lens was built, and z_c was found for a set of five selected field points. The method involves localizing the position of the smallest focusing spot for input parallel rays arriving at a selected angle. The field points were chosen in relation to the CMOS size, whose long side is 32.8 mm, and the points were located at 0, 8.2, 16.4, 24.6, and 32.8 mm. For these values, the corresponding field curvature is 75, 74.59, 73.28, 71.43, and 68.99 mm. To find the distribution of z_c for arbitrary field points, a fourth-order polynomial approximation was used. It was found that the error of the approximation is negligible.
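The polynomial approximation of z_c can be reproduced directly from the five OSLO points quoted above; a minimal sketch:

```python
import numpy as np

# Fourth-order polynomial approximation of the field curvature z_c(h),
# fitted to the five OSLO field points quoted above (mm -> mm).
field_position = np.array([0.0, 8.2, 16.4, 24.6, 32.8])        # mm
field_curvature = np.array([75.0, 74.59, 73.28, 71.43, 68.99])  # mm

z_c = np.poly1d(np.polyfit(field_position, field_curvature, deg=4))
print(z_c(12.0))  # interpolated field curvature at an arbitrary field point, mm
```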
4. HOLOGRAM RECONSTRUCTION ALGORITHM
The numerical reconstruction of a hologram captured in the WADH system is a challenging task. The reasons are the wide angle, the long propagation distance, and the large field curvature. The reconstruction of wide-angle holograms over long distances with classical propagation algorithms, such as Rayleigh–Sommerfeld integration (RS) [32] and the angular spectrum (AS) [33], requires huge quantities of computational resources, which are often unavailable. This section explains the hologram reconstruction algorithm, which includes fringe processing and numerical propagation with field curvature compensation.
The first element of the hologram reconstruction is a fringe processing part, which decodes the object wave from the hologram . That process differs for off-axis and phase-shifting registration modes. For the phase-shifting mode, the object’s phase and amplitude are determined using a phase-shifting technique, and the wavefield is recovered at full bandwidth. On the other hand, for off-axis holograms, it is required to filter out the part of the spectrum corresponding to the first-order term and successively remove the carrier frequency by shifting the resulting signal to the spectrum center. As a result, in both modes, the object wave is obtained.
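Both recovery routes can be sketched as below. The five-frame estimator shown is the standard Hariharan-type formula for π/2 phase steps, and the off-axis route uses a simple rectangular spectral window; the exact phase steps, filters, and normalizations of the paper may differ, so this is only an illustrative sketch.

```python
import numpy as np

def object_wave_phase_shifting(I):
    """Five-frame (Hariharan-type) recovery, assuming pi/2 phase steps.
    I: sequence of five hologram intensities I1..I5.  Returns the object
    wave up to a constant complex factor."""
    I1, I2, I3, I4, I5 = I
    re = 2.0 * I3 - I1 - I5        # real part (up to a constant)
    im = 2.0 * (I2 - I4)           # imaginary part (up to a constant)
    return re + 1j * im

def object_wave_off_axis(hologram, carrier_px):
    """Single-frame off-axis recovery: select the +1 order around the carrier
    frequency (carrier_px, in FFT pixels) and shift it to the spectrum centre."""
    ny, nx = hologram.shape
    spec = np.fft.fftshift(np.fft.fft2(hologram))
    cy, cx = ny // 2 + carrier_px[0], nx // 2 + carrier_px[1]
    hy, hx = ny // 8, nx // 8              # illustrative window half-sizes
    window = spec[cy - hy:cy + hy, cx - hx:cx + hx]
    spec_centered = np.zeros_like(spec)    # re-centre the selected band
    spec_centered[ny // 2 - hy:ny // 2 + hy, nx // 2 - hx:nx // 2 + hx] = window
    return np.fft.ifft2(np.fft.ifftshift(spec_centered))
```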
The second element is a wavefield propagation made of two steps. The first step is a transfer of the wavefield from the hologram to the RF plane, and the second step is the numerical propagation of the wavefield from the RF plane to the object plane. According to Fig. 2, the hologram and RF planes are in the focal planes of lens L. Despite this, the transfer of the wavefield cannot be performed directly by FT, as explained in Section 3, due to the presence of Petzval curvature. The value of the field curvature is field dependent, which is expressed by Eq. (4) through the spherical phase component proportional to the field curvature z_c. The developed compensation method of field curvature is based on the inversion of Eq. (4). Unfortunately, its direct implementation would require calculating integrals in the form of an FT for each hologram pixel, making this approach computationally too expensive. Here, to reduce the computational effort, we use the circular symmetry of the field curvature and assume that the field curvature is locally constant. Accordingly, the distribution of the field curvature is divided into Q regions, in each of which it is constant and takes the corresponding value z_c,q. This approximation enables us to develop the wave propagation algorithm from the CMOS plane to the RF plane, which includes field curvature compensation, presented in the diagram in Fig. 5(a). The algorithm involves the evaluation of Q FFTs and can be expressed as Eq. (5), where W_q is the filtering window and z_c,q is the approximated field curvature of the qth region. The choice of appropriate window filters is crucial to the algorithm’s computation time and the quality of the field curvature compensation. The use of narrow windows guarantees high accuracy but implies a higher computational load since more FTs need to be computed. On the other hand, the use of wide window filters can decrease the quality of the hologram reconstruction. In the experiment, we use filters in the form of donut-shaped windows to avoid a hard transition between two neighboring regions. Each window filter comprises a constant part and an overlap part, as shown in Fig. 5(b). The overlap parts of two neighboring filters occupy common space, and their profiles have to fulfill Rayleigh’s energy theorem. Thus, the sum of the squares of the filters has to be constant over the entire hologram plane.
Figure 5.(a) Scheme of piecewise field curvature compensation process. (b) Cross section of window filters .
In the calculation process, as shown in Fig. 5(a), for each region q, the field is multiplied by the donut filter W_q, and then the FFT is applied. The obtained result is multiplied by the spherical phase component canceling the field curvature z_c,q. All Q partial results are summed up and compose the object field at the RF plane.
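A sketch of this piecewise scheme is given below. The donut windows are built with sine/cosine crossfades so that the squares of neighboring filters sum to one, as required above. The compensating phase factors (one per region, derived from z_c,q according to Eq. (5)) are passed in as precomputed arrays, since their exact form is defined by Eq. (5) and is not reproduced here; all names are assumptions of this sketch.

```python
import numpy as np

def donut_windows(r, edges, overlap):
    """Build Q radially symmetric filters W_q(r) from the region edges.
    Neighboring windows are crossfaded with sine/cosine ramps of width
    'overlap' so that sum_q W_q(r)**2 == 1 (Rayleigh energy condition)."""
    Q = len(edges) - 1
    windows = []
    for q in range(Q):
        lo, hi = edges[q], edges[q + 1]
        w = np.zeros_like(r)
        core_hi = hi - overlap if q < Q - 1 else hi
        w[(r >= lo) & (r < core_hi)] = 1.0
        if q > 0:  # rising edge, shared with the falling edge of window q-1
            rise = (r >= lo - overlap) & (r < lo)
            w[rise] = np.sin(0.5 * np.pi * (r[rise] - (lo - overlap)) / overlap)
        if q < Q - 1:  # falling edge, shared with the rising edge of window q+1
            fall = (r >= hi - overlap) & (r < hi)
            w[fall] = np.cos(0.5 * np.pi * (r[fall] - (hi - overlap)) / overlap)
        windows.append(w)
    return windows

def piecewise_curvature_compensation(u_cmos, windows, comp_phases):
    """Follow Fig. 5(a): window the CMOS-plane field, apply the FFT, multiply
    by the phase canceling z_c,q, and sum the Q partial results into the RF field."""
    u_rf = np.zeros(u_cmos.shape, dtype=complex)
    for w_q, phase_q in zip(windows, comp_phases):
        u_rf += np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(u_cmos * w_q))) * phase_q
    return u_rf
```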
The result obtained in the first step of the wavefield transfer algorithm, i.e., the object wave at the RF plane, can be directly used as content for a near-eye display of wide FOV. Meanwhile, in the second and final step of the algorithm, that is, the numerical propagation from the RF plane to the object plane, a holographic reconstruction of the recorded object is determined. Propagation of the wavefield over a wide angle and a large distance with the classical propagation algorithms, RS and AS, can be problematic since those methods preserve the sample size. This means the input signal has to be enlarged to the size of the image space before propagation. That creates a need for a huge quantity of computational resources, which is not available on most PC units. To show the necessary increase of the computational load to propagate the wavefield, a numerical measure called the space-bandwidth product (SW) can be used. The SW, in general, is equal to the product of the spatial and frequency supports of a signal. In the investigated case, the SW of the wavefield at the RF plane follows from its pixel count and pixel size. However, as mentioned before, employing the classical AS or RS requires enlarging the spatial support to the size of the image space, while the frequency support is unchanged since the sample size is preserved. Ergo, the SW grows to a value that is many times larger than the SW of the original input signal.
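The scale of the problem can be illustrated with a rough, back-of-the-envelope SW estimate. All numbers below (pixel pitch and count at the RF plane, propagation distance) are assumptions chosen only for illustration, not the exact values of the experiment.

```python
import numpy as np

# Rough 1D space-bandwidth (SW) estimate for classical AS/RS propagation.
# All parameters are assumed, illustrative values.
wavelength = 532e-9   # m, assumed
pitch = 1.2e-6        # m, assumed sample size of the wavefield at the RF plane
n_samples = 5120      # assumed number of samples per dimension
z = 1.0               # m, assumed propagation distance to the scene

B_x = n_samples * pitch   # spatial support of the input signal
B_f = 1.0 / pitch         # frequency support (fixed by the sampling)
sw_in = B_x * B_f         # = n_samples

# Classical AS/RS preserve the sample size, so the grid must be padded to the
# full image-space width B_x + 2*z*tan(theta_max) before propagation.
theta_max = np.arcsin(wavelength * B_f / 2)
B_x_out = B_x + 2 * z * np.tan(theta_max)
sw_out = B_x_out * B_f

print(f"SW growth factor: {sw_out / sw_in:.0f}x, "
      f"grid size per dimension: {int(B_x_out / pitch)} samples")
```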
To enable numerical reconstruction, it is necessary to perform calculations using a reduced SW. In this regard, we employ the propagation tool called AS-CSW [29,30], which reduces the required size of the space-bandwidth product for wavefield calculations. It is done by introducing a compressed representation of the optical signals, denoted in this section with a dedicated subscript marking the compressed CSW signal. With the AS-CSW algorithm, the calculation of the output field in the image plane is expressed as Eq. (6), where the compressed representation of a virtual paraxial spherical wave takes the form of Eq. (7).
The kernel factor appearing in Eq. (7) is the compressed representation of the AS propagation kernel, which is given by Eq. (8).
Once the compressed representation of the propagated field is known, the output field can be found directly from the remaining relations of the algorithm [Eqs. (9) and (10)].
The calculated result is sampled with a large output pixel pitch, which is an advantage of the algorithm because the output plane can be covered with far fewer samples. The method described by Eqs. (6)–(10) does not yet account for the SW extension that is necessary for aliasing-free computations; including this extension reduces the computational efficiency of the algorithm. However, the SW expansion required by AS-CSW is much smaller than the padding necessary in the classical AS/RS methods, making AS-CSW a practical tool. The needed SW expansion can be calculated by analyzing the transfer of the generalized SW [34] through the AS-CSW algorithm. For the propagation algorithm developed here, we consider the case of an input wave limited in space (by the RF) and in frequency (by the size of the CMOS). For the sake of clarity, a one-dimensional discussion is outlined here. For these conditions, a proper output sampling has to be found that accounts for the SW extension resulting from the algorithm calculations. The first element that has to be analyzed is the multiplication by the propagation kernel. The needed spatial extension can be calculated from the spatial frequency localization [35] of this kernel, and the necessary extension of the frequency space follows from it.
The new sizes of the spatial and frequency supports depend on the chosen output sampling. These calculations are very fast, and the optimal value can be found by scanning values in the neighborhood of the nominal distance. For our setup parameters, the required extended spatial support equals 18,530 μm. This means that the required SW is larger than the SW of the original input signal by a factor that is small compared to the padding required by the classical AS/RS.
5. CALIBRATION OF HOLOGRAPHIC CAPTURE SETUP
The WADH system enables undistorted recording and reconstruction of large 3D scenes, but it requires careful alignment of the system components. From this point of view, the most important elements are the CMOS camera and the RF. The RF is a small axial element, so its position can be adjusted with an autocollimator. In contrast, setting the correct position of the CMOS sensor is more complicated. As can be seen in the schematic diagram (Fig. 2), the sensor captures a spherical wave whose focus is close to its axial position. Therefore, a small axial displacement of the camera causes a large relative registration error of the spherical wave, translating into large changes in the position of the reconstructed point. This is also illustrated in Fig. 6. In addition, for the part of the scene close to the optical axis, satisfying the Fresnel approximation, the obtained deformations can be determined [36], while for larger angles, the reconstruction errors are difficult to predict. Therefore, in this study, a calibration algorithm was developed that allows precise positioning of the CMOS camera.
Figure 6.Illustration of calibration process geometry: registration geometry (black lines) with axially misaligned camera and reconstruction geometry (red lines) assuming the camera is set at lens focal plane.
The large deformation of the reconstruction volume is the basis of the developed calibration procedure for the axial location of the CMOS camera. Figure 6 presents the case of a misaligned camera for hologram registration and reconstruction of a single on-axis point object O at a known distance from the RF. There, the hologram registration process is marked with black rays, while the reconstruction geometry is marked with red. Any shift of the CMOS camera position results in a sharply reconstructed point at a shifted distance, while hologram reconstruction at the nominal object distance gives a defocused image of the point. Conversely, for the proper camera position, a sharp reconstruction of the point is obtained at the nominal distance. Relying on that property, the calibration procedure has two steps: first, capturing a set of holograms for different camera positions and, second, reconstructing the captured holograms at the nominal distance and calculating numerical focus measures. The absolute total variance of the first derivative of the image is used as a numerical measure of sharpness [37]. In the calibration procedure, instead of a point object, we employ a speckle pattern test target [38]. The proposed calibration scene requires a target standing on the optical axis of the registration setup (hologram center) at a known distance. As the test pattern that allows employing focus measures, a speckle pattern consisting of various resolutions was printed on white paper. Holograms were captured for different camera positions along the optical axis with a step of 100 μm, covering a range of 2 mm. Once the hologram set is reconstructed, only the target part of the reconstructed image is considered for the calculation of the focus measure.
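A sketch of the focus-measure scan is given below. The sharpness metric is a simple gradient-based measure in the spirit of Ref. [37], and reconstruct_at stands for the reconstruction pipeline of Section 4; both the names and the exact metric are assumptions of this sketch, not the paper's implementation.

```python
import numpy as np

def sharpness(image):
    """Focus measure: sum of absolute values of the first derivative of the
    image (a gradient-based metric in the spirit of Ref. [37])."""
    gy, gx = np.gradient(image.astype(float))
    return np.abs(gx).sum() + np.abs(gy).sum()

def calibrate_camera_position(holograms, camera_shifts, reconstruct_at, d_target, roi):
    """For each camera shift, reconstruct its hologram at the nominal target
    distance d_target and score the sharpness of the test-target region 'roi'.
    Returns the camera shift giving the sharpest reconstruction and all scores."""
    scores = []
    for h in holograms:                        # one hologram per camera position
        reconstruction = np.abs(reconstruct_at(h, d_target))
        scores.append(sharpness(reconstruction[roi]))
    return camera_shifts[int(np.argmax(scores))], scores
```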
Figure 7 illustrates the results obtained during the calibration process. Figure 7(a) presents the reconstruction of the entire object, with the used speckle pattern in the top left corner, for the camera position at which the autofocusing measure curve reaches its maximum. Figures 7(b)–7(d) present zooms of the reconstructions for three different camera shifts, with the aligned position shown in Fig. 7(c).
Figure 7.Reconstructions of (a) the entire calibration scene at the nominal distance for the proper camera position. The yellow dotted box corresponds to the area of the zooms of the reconstructed test target for different camera shifts: (b), (d) shifted camera positions and (c) 0 mm.
6. EXPERIMENTAL RESULTS
This section presents the numerical reconstructions of holograms captured with the WADH system. Here, the complete capturing-processing path, with the developed nonparaxial reconstruction algorithm, is applied to the experimental data, showing the system advantages related to the complexity and the large, FOV-unlimited volume of the recorded scene. To demonstrate these setup properties, a dedicated model was built. Our model presents two Hussars roaming through a stony forest landscape. Thus, the proposed scene includes multiple objects placed at different depths, together with the scene’s ground and background. Another important feature is the overall size of the test scene, which exceeds the setup FOV, according to the camera and lens parameters given in Section 2. The overall view of the built model is presented in Fig. 8; as can be seen, it contains objects such as rocks, a dry bush, moss, a tree log, trees, a banner, and two Hussar figurines. The nearest objects of the model volume are located 650 mm from the Fourier plane, and the farthest background objects are located at a distance of 1100 mm.
Figure 8.(a) Photograph of the experimental model showing the scene’s volume. (b) Photograph of the scene from the RF plane.
In the experiment, off-axis and phase-shifting holograms are investigated. In the WADH system, as in any other system using a coherent light source, the reconstruction quality is reduced by speckle noise. To minimize its influence on the quality of the obtained reconstruction, we employ the hologram averaging method [17] by capturing and reconstructing multiple holograms, each with a different speckle pattern produced by the diffusers. The number of holograms that satisfactorily minimizes speckle noise was determined in an experiment in which 100 holograms were recorded, reconstructed, and averaged. The calculated signal-to-noise ratios are 2.0, 4.6, and 6.2 for 1, 20, and 100 averaged holograms, respectively. Averaging 20 holograms was chosen to reduce speckles in the experiment.
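The averaging itself amounts to accumulating the reconstructed intensities of holograms captured with different diffuser-induced speckle realizations. A minimal sketch follows; whether amplitude or intensity is averaged is not detailed here, so intensity averaging is assumed, and reconstruct stands for the full pipeline of Section 4.

```python
import numpy as np

def averaged_reconstruction(holograms, reconstruct):
    """Speckle reduction by averaging the reconstructed intensities of holograms
    captured with different speckle patterns (intensity averaging is assumed)."""
    avg = None
    for h in holograms:
        intensity = np.abs(reconstruct(h)) ** 2
        avg = intensity if avg is None else avg + intensity
    return avg / len(holograms)
```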
Figures 9(a) and 9(b) present the reconstructions of single and averaged holograms captured in off-axis mode, while Figs. 9(c) and 9(d) show the reconstructions of single and averaged holograms captured in phase-shifting mode. For both averaged results, 20 holograms were reconstructed and used in the averaging method. As can be seen, the reconstructions of off-axis holograms have a larger depth of focus than the results obtained in phase-shifting mode, which is related to the smaller filter size limiting the hologram’s numerical aperture. Regardless of the mode used for capturing the holographic data, the resulting reconstructions do not exhibit aliasing, as can be seen particularly in the case of the left Hussar, which only partially fits in the FOV of the system.
Figure 9.Numerical reconstructions of holograms, registered in two proposed modes: (a), (b) off-axis and (c), (d) phase-shifting. Panels (a) and (c) present a single hologram, while (b) and (d) present an averaged one obtained by averaging 20 hologram reconstructions. (e) Reconstructions for Petzval curvature correction with different Q.
The results shown in Figs. 9(a)–9(d) were achieved with the field curvature compensation applied. To discuss the choice of a satisfactory level of Petzval curvature compensation, Fig. 9(e) shows a set of reconstructions of holograms without compensation and with compensation for an increasing number of regions Q, up to Q = 15. To minimize the influence of speckle noise during visual evaluation, the presented results are from the averaged reconstruction of holograms registered with the phase-shifting method. Zooms of two elements were chosen to show the effect of Petzval curvature compensation in different parts of the image: the flag (top row) and the feathers (bottom row); the second element of the scene has a large field angle. As can be seen, without correction and for the smallest Q, the details of the objects are blurred, while for Q = 8 and Q = 15, a significant improvement in the sharpness of details is observed, with Q = 15 showing the most detail.
To show the depth preservation and the achieved quality of the reconstructed scene, zooms of details placed at different distances are presented in Fig. 10. From the image space, we choose the left horse’s head, a part of the right Hussar’s ornate pauldron, and a fragment of the coniferous tree in the background, which are at distances of 750 mm, 910 mm, and 1020 mm, respectively. The structural similarity index measure (SSIM) is given for the zooms of the horse’s head, where Fig. 10(d) serves as the reference image; thus, the quality improvement brought by the averaging method can be quantified. Also, in Figs. 10(e)–10(h) it can be seen that the averaging method not only reduces the speckle noise intensity but also decreases the size of the speckles.
Figure 10.Zoomed details of single reconstructed holograms registered in (a), (e), (i) off-axis and (b), (f), (j) phase-shifting modes, and of averaged hologram reconstructions registered in (c), (g), (k) off-axis and (d), (h), (l) phase-shifting modes. Panels (a)–(d) present zooms of the reconstructed left horse’s head, (e)–(h) the part of the right Hussar’s ornate pauldron, and (i)–(l) the fragment of the coniferous tree. The displayed details are at distances of 750 mm, 910 mm, and 1020 mm, respectively.
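The SSIM values for the horse's-head zooms against the reference of Fig. 10(d) can be computed, for example, with scikit-image; the normalization used below is an assumption of this sketch.

```python
import numpy as np
from skimage.metrics import structural_similarity

def ssim_against_reference(zoom, reference):
    """SSIM of a reconstructed zoom against the reference zoom, e.g., the
    averaged phase-shifting result of Fig. 10(d)."""
    zoom = zoom / zoom.max()
    reference = reference / reference.max()
    return structural_similarity(zoom, reference, data_range=1.0)
```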
7. SUMMARY
This work presents a WADH setup with a nonparaxial FOV of 25° × 19°. Its unique advantage is that it is the only known system that can capture the holographic content of a deep/large scene containing elements that exceed the FOV of the system. Thus, multiple objects with all of their surroundings, including the ground and background, can serve as content. This practical advantage of the system is related to its aliasing-free feature, since objects outside the FOV do not generate aliasing images and do not distort the image. The included experiment, in which a complex model with many objects placed in a large volume exceeding the FOV of the device is recorded and reconstructed, demonstrates this unique advantage.
The recorded holograms can be directly reconstructed without 3D deformation in the wide FOV of a near-eye display. Importantly, the FOV of the WADH can be modified and therefore enlarged, as it depends on the focal length and the size of the sensor. Two modes of hologram registration have been developed: off-axis and phase-shifting. Off-axis offers faster registration, while phase-shifting offers higher resolution. Recorded holograms are reconstructed by the developed hologram reconstruction tool with two major components: the Petzval curvature compensation algorithm and nonparaxial propagation with the AS-CSW method. It is worth noting that the reconstruction process of the recorded off-axis holograms can be optimized since they use only a quarter of the bandwidth.
In summary, WADH significantly enriches the possible sources of holographic content for near-eye displays by showing that not only CGH algorithms can offer wide FOV hologram generation. In addition, we believe that due to the simplicity of the system and the ability to record high-volume scenes, this system will be an important source of holographic data.
[26] A. Geltrude, M. Locatelli, and P. Poggi, "Infrared digital holography for large object investigation," in Digital Holography and Three-Dimensional Imaging (2011), paper DWC13.