Based on the principle of the holographic stereogram, we have published Letters realizing a full-color, real-time holographic display by means of a holographic functional screen (HFS) combined with a camera-projector array[1–5]. In practice, it is difficult to integrate the whole system because each individual camera-projector must be calibrated; meanwhile, the high cost of deploying large numbers of camera-projectors makes such a system unacceptable for public consumption.
Integral photography[6] theoretically seems like an ideal approach for both the acquisition and restoration of three-dimensional (3D) light fields; however, it is difficult to overcome the inherent trade-off imposed by the microlens between sub-image quality and the resolution of the final 3D display, owing to the diffraction effect of the lens aperture. Therefore, a satisfactory 3D display remains a challenge yet to be overcome.
In this Letter, we propose a novel approach to realize a holographic display perceived as perfect by human eyes. It is equivalent to the setup in Refs. [1–5], except that the optical axes of the individual camera-projectors are parallel to one another, i.e., anchored at an infinitely far point. It can be thought of as a technical innovation following from our previously proposed physical concepts, the hoxel and the spatial spectrum, which are properly defined by the four-dimensional Fourier transform of the wave function of the natural light field. The purpose is the most compact design, so as to carry out the application at the lowest cost.
There are four steps in our innovation:
1. Parallel acquisition of the spatial spectrum.
Figure 1 is the sketched map for the parallel acquisition of the spatial spectrum. The acquisition plate is a lens array composed of small lenses with identical imaging parameters: the aperture of each lens, the concentric distance (pitch) between neighboring lenses, and the focal length, which together determine the viewing angle of each lens. As the optical axes of the individual lenses are parallel to one another, the spatial spectrum of a 3D object acquired by each lens inside its viewing angle Ω corresponds to what we have described before in Refs. [1–5]. The sampling angle of the acquisition is determined by the pitch and the distance between the lens plate and the object. A light-sensitive component (such as film, a CCD, or a CMOS sensor) is placed near the focal plane of the lens plate to record the spatial spectrum; its resolution corresponds to that of each imaging unit of the lens plate, so the acquired object is constructed by hoxels, one per resolvable cell. The reference point is located at the center of the reference surface, at a fixed distance from the object. A field aperture is placed between the lens plate and the sensor to prevent crosstalk between neighboring imaging units. Compared with traditional integral photography, the lens array here is not a microlens array: the aperture of each lens is large, so as to acquire a sufficiently distinct image of each spatial spectrum, but it is never larger than the pitch. The focal length determines the viewing angle Ω of each individual lens; the bigger Ω is, the bigger the scope of the 3D object the lens can acquire. Here, we suppose that Ω is big enough that at least one lens near the center of the lens array acquires the whole object, as shown in Fig. 1.
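The geometry above can be sketched numerically. This is a minimal illustration, not the Letter's own formulas (which did not survive extraction): it assumes the field aperture limits each sub-image to one pitch, so the per-lens viewing angle is Ω = 2·atan(p/2f), and that the sampling angle is the angle one pitch subtends at the object distance, ω = 2·atan(p/2z).

```python
import math

def lens_array_angles(aperture_d_mm, pitch_p_mm, focal_f_mm, object_dist_z_mm):
    """Geometry sketch for a parallel-acquisition lens array.

    Assumed relations (hypothetical, reconstructed from context):
      - viewing angle of one lens: Omega = 2*atan(pitch / (2*focal)),
        i.e. the field aperture trims each sub-image to one pitch;
      - sampling angle of the array: omega = 2*atan(pitch / (2*z)),
        the angle one pitch subtends at the object distance z.
    Returns both angles in degrees.
    """
    omega_view = 2.0 * math.degrees(math.atan(pitch_p_mm / (2.0 * focal_f_mm)))
    omega_sample = 2.0 * math.degrees(math.atan(pitch_p_mm / (2.0 * object_dist_z_mm)))
    return omega_view, omega_sample

# Illustrative numbers only: 2 mm aperture, 2.5 mm pitch,
# 10 mm focal length, object 500 mm from the lens plate.
view, sample = lens_array_angles(2.0, 2.5, 10.0, 500.0)
```

With these illustrative values the per-lens viewing angle is about 14.3° while the sampling angle is below 0.3°, which shows why many lenses are needed to tile the full viewing angle.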

Figure 1.Sketched map for parallel acquisition of spatial spectrum: hoxels are imaged by small lenses to form images of the spatial spectrum.
Compared with the anchoring acquisition described in Refs. [1–5], all sub-images except the spatial spectrum image at the center of the lens array (which is exactly the same) are shifted by a phase factor on the spectrum surface relative to the original spatial spectrum acquired by anchoring acquisition. They are then trimmed by the field aperture so that the reference point on each sub-image of the original object overlaps at the same position after imaging back to the original space. In Figs. 2 and 3, the corresponding coordinates of the reference point and its sub-image inside each spatial spectrum are respectively compared. This phase factor is the inherent characteristic of the parallel acquisition described in this Letter; it accounts for the shift of each spatial spectrum when 3D data acquired by anchoring acquisition are played back in the parallel configuration, or vice versa.
Figure 2.Sketched map for anchoring acquisition of spatial spectrum in Refs. [1–5]. The reference point is at the same position inside each individual sub-image.

Figure 3.Sketched map for parallel acquisition of spatial spectrum in this Letter. The sub-image is shifted a phase factor compared with Fig. 2.
2. Holographic coding of the spatial spectrum.
It is necessary here to create a holographic coding by making use of the pixels of each spatial spectrum acquired as in Fig. 1 to generate the holographic coded spatial spectrum. The details are shown in Fig. 4. We can use a computer to pick the corresponding pixel of each sub-image to fill the inside of a certain hoxel of the object space shown in Fig. 1, obtaining the coded spatial spectrum of this hoxel. The significance of such holographic coding is as follows: (1) We can efficiently realize the coordinate transformation between “image and spectrum” to eradicate the fatal drawback of “pseudoscopic imaging.” (2) Such a coding method is versatile and can be used in any kind of 3D display system; the holographic coded image can be directly broadcasted by the lens array, or treated as the “hogel” to print the 3D hologram dot by dot[7]. (3) By simply magnifying or reducing the pattern size, the size of the hoxel can be arbitrarily changed to obtain a magnified or reduced display of a 3D object. (4) According to the requirements of the acquired or displayed 3D space (such as resolution, depth, and viewing angle), the maximum sampling angle can be designed for a perfect 3D display with the minimum number of spatial spectra, for the highest efficiency.
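The “image and spectrum” coordinate transformation can be sketched as a pure index transposition: the coded tile for hoxel (i, j) collects pixel (i, j) from every sub-image. This is a simplified sketch of the rearrangement in Fig. 4 (the scaling and trimming steps are omitted, and the nested-list layout is an assumption of this example):

```python
def holographic_code(spectra):
    """Rearrange acquired spatial spectra into per-hoxel coded tiles.

    `spectra[a][b][i][j]` is pixel (i, j) of the sub-image recorded by
    lens (a, b) of a K x K array; each sub-image is R x R pixels.
    The coded image for hoxel (i, j) collects pixel (i, j) from every
    sub-image -- a pure transposition between "image" and "spectrum"
    coordinates: coded[i][j][a][b] == spectra[a][b][i][j].
    """
    K = len(spectra)          # lenses per side of the array
    R = len(spectra[0][0])    # pixels per side of one sub-image
    return [[[[spectra[a][b][i][j] for b in range(K)]
              for a in range(K)]
             for j in range(R)]
            for i in range(R)]
```

Because the mapping is a bijection on indices, applying it twice (with K and R swapped) recovers the original sub-images, which is why the same coding serves both acquisition and display.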

Figure 4.Sketched map for holographic coded spatial spectrum image of a hoxel .
3. Recovery of discrete spatial spectrum.
After a simple magnifying or reducing treatment, the frames of the holographic coded image are displayed at the corresponding positions on a flat-panel displayer whose resolution is higher than that of the coded data. Figure 5 is the sketched map for the restoration of the integral discrete spatial spectrum, where a lens plate is located in front of the displayer at a fixed distance. This lens plate is equivalent to the one in Fig. 1 when the hoxels are correspondingly reduced or magnified. It is still composed of small lenses with the same imaging parameters: the aperture of each lens and the concentric distance, which is just the hoxel size preset in Fig. 1. A field aperture is again placed between the displayer and the lens plate to prevent crosstalk between imaging units. Each lens on the plate has the same viewing angle Ω as in the acquisition, to avoid deformation of the final image. As shown in Fig. 5, each coded spatial spectrum displayed on the monitor is projected backwards as a discrete spatial spectrum of the original object to form the 3D image, where the number of preset hoxels is changed to the number of finally displayed hoxels. The latter is obtained as follows: (1) suppose the pixel size of the displayer is given; (2) when a coded pattern is imaged by a lens of the plate and magnified a certain number of times, the corresponding hoxel size is the pixel size multiplied by that magnification; (3) suppose the length and width of the displayer are given; (4) the number of displayed hoxels is then the display area divided by the hoxel area. It can be seen that this number has no direct relation to the number of preset hoxels; it is the eventual number of hoxels formed by directional projections from the original hoxels, i.e., the final hoxel resolution of the holographic display inside the display area. Compared with traditional integral imaging techniques, the lens here is not a microlens; otherwise, the white-light speckle noise would be unacceptable. The aperture of each lens is large, so as to resolve the features of the coded pattern, but it is never larger than the concentric distance.
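Steps (1)–(4) above reduce to simple arithmetic. The sketch below follows that recipe with illustrative numbers of our own choosing (the Letter's actual pixel size, magnification, and panel dimensions did not survive extraction):

```python
def displayed_hoxel_count(pixel_size_mm, magnification, width_mm, height_mm):
    """Final hoxel resolution of the display, per steps (1)-(4).

    Assumed relations: the hoxel size equals the display pixel size
    magnified by the lens of the array; the hoxel count is how many
    such hoxels tile the display area. All lengths in millimetres.
    """
    hoxel_mm = pixel_size_mm * magnification
    return int(width_mm // hoxel_mm) * int(height_mm // hoxel_mm)

# Illustrative: 0.25 mm pixels magnified 10x -> 2.5 mm hoxels
# on a 500 mm x 300 mm panel.
n_hoxels = displayed_hoxel_count(0.25, 10, 500.0, 300.0)
```

Note that, as the text says, this count depends only on the panel geometry and the magnification, not on the number of hoxels preset at acquisition.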

Figure 5.Sketched map for 3D reconstruction decoded by the HFS.
4. Integral reconstruction decoded by HFS.
As shown in Fig. 5, we place a corresponding HFS, as described in our previous work[1–5], at the indicated position to make the expanding angle of each discrete spatial spectrum input the same as the sampling angle shown in Fig. 1, i.e., to combine the coded spatial spectra without severe overlap (what appears is a uniform bright background, because the edge features of each lens are smeared together by the HFS). This forms an integrally continuous output of the spatial spectrum, and human eyes can then observe a real holographic 3D image floating on the HFS. It should be noted that the HFS must be located at the above-mentioned place; this is the most efficient way to display a given sampling angle. The HFS can be regarded as the standard plane straddled by the displayed 3D space, whose depth is determined by the sampling angle. If the HFS is not correctly located, so that the broadcasting angle is much bigger or smaller than the sampling angle, the displayed space lacks part of the original 3D data, which results in severe crosstalk or a nonlinear appearance.
In order to make our innovation more comprehensible, the following analysis of the imaging quality was carried out:
1. Spatial spectrum description of 3D information:
Suppose a hoxel of a given size is preset in a 3D space of a given depth; the corresponding sampling angle is then the angle the hoxel subtends across that depth. That is to say, a 3D object constructed by individual small cubic irradiators can be completely described by individual light tapers, in which the apex of each light taper is located inside the plane of the HFS, while the divergent angle equals the sampling angle. The viewing angle of this 3D object is the full angular range covered by these tapers.
Here, the total number of spatial spectra follows directly, because a fixed number of spatial spectra is included inside each hoxel.
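The relation between hoxel size, display depth, and the required number of spatial spectra can be sketched as follows. The formula is a reconstruction, not the Letter's own (which did not survive extraction): it assumes the sampling angle is the angle a hoxel subtends across the display depth, ω = 2·atan(s/2Δz), and that the viewing angle must be tiled by that sampling angle.

```python
import math

def spectra_per_dimension(hoxel_mm, depth_mm, viewing_angle_deg):
    """Minimum number of spatial spectra per dimension for a display
    of the given viewing angle.

    Assumed (hypothetical) relations:
      sampling angle omega = 2*atan(hoxel / (2*depth));
      spectra needed     = viewing angle / omega, rounded up.
    """
    omega = 2.0 * math.degrees(math.atan(hoxel_mm / (2.0 * depth_mm)))
    return math.ceil(viewing_angle_deg / omega)

# Illustrative: 2.5 mm hoxels, 500 mm of displayed depth, 30 deg viewing angle.
n = spectra_per_dimension(2.5, 500.0, 30.0)
```

The sketch makes the trade-off explicit: halving the hoxel size or doubling the depth doubles the number of spectra, and hence the pixel budget, needed per dimension.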
2. Spatial spectrum description of human vision:
Some basic parameters of human eyes are as follows: (1) the pupil distance (the average separation of the two eyes); (2) the pupil diameter (2–8 mm, depending on the brightness); (3) the angular resolution limit; and (4) the viewing angle in the stationary state. When human eyes are fixed on a certain position, human vision can express a limited number of hoxels and needs only two spatial spectra (one per eye) to form the binocular stereoscopic image. The spatial spectra identified by human eyes, included in the two hoxels covering the pupils, form the objective 3D knowledge acquired by eyes submerged in such a hoxel ocean.
3. Effective acquisition and restoration:
Aiming at the spatial spectrum expressions described in points 1 and 2, the visible 3D space information can be fully acquired by the lens array plate shown in Fig. 1 and fully restored by the lens array plate shown in Fig. 5. The detailed requirements are as follows, taking approximately 550 nm as the average wavelength of visible light. The sizes of the lens apertures determine the size of the hoxels, or cubic voxels, that are acquired or restored; the concentric distances determine the sampling angle of the acquired or restored space, and therefore the depth of this space; the focal lengths determine the viewing angle Ω of this space, which characterizes the spatial-spectrum processing capability of a lens unit. Because we adopt the HFS to compensate the nonlinear features of the lens array, the microlens paradox of integral photography can be completely avoided. The key is a high-enough resolution of the corresponding sensor (in Fig. 1) and displayer (in Fig. 5) to identify and display the spatial spectrum information composed of the above-mentioned individual pixels.
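The claim that the array lenses must be much larger than microlenses can be checked with the standard diffraction limit for a circular aperture, θ ≈ 1.22·λ/D (this textbook relation is our illustration; the Letter's own inequality did not survive extraction):

```python
import math

def diffraction_spot_urad(aperture_mm, wavelength_nm=550.0):
    """Angular radius, in microradians, of the diffraction-limited spot
    of a circular aperture: theta = 1.22 * lambda / D.
    550 nm is taken as the average visible wavelength, as in the text."""
    return 1.22 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3) * 1e6

# A 2.5 mm array lens vs. a 0.25 mm microlens (illustrative sizes).
big = diffraction_spot_urad(2.5)    # ~270 microradians
tiny = diffraction_spot_urad(0.25)  # ten times worse
```

The order-of-magnitude gap between the two apertures is the "microlens paradox" in numbers: shrinking the lens to gain hoxel density blurs each spatial spectrum by diffraction.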
By making use of a commercially available 4 K flat-panel displayer, the KKTV LED39K60U, according to the above-mentioned principles, we have achieved a digital holographic display with full color and full parallax. The details of the parameters are as follows: (1) the hoxel size is 2.5 mm, and (2)–(4) the number of hoxels, the number of spatial spectra, and the viewing angle follow from the design principles above.
Figure 6 is the sketched map of the holographic coded pattern of the spatial spectrum inside each small lens; here, the acquisition process is replaced by directly rendering computer-simulated 3D models. In order to make full use of the limited pixels of the 4 K displayer, we aligned 3818 small lenses in a honeycomb array. Figure 7 is a picture taken from one direction before the HFS is applied: no detailed features can be identified, only discrete light rays from the hoxels. Figure 8 is a picture taken from the same direction after the HFS is applied: all features are properly decoded by the HFS into the final displayed hoxels. Figure 9 shows pictures taken from multiple directions of the holographically displayed digital 3D models formed by the coded spatial spectrum shown in Fig. 6; the smooth color restoration and the full-parallax relationship of the displayed space can be distinctly seen. Figure 10 is another result, a holographically displayed “skull,” in which each profile is clearly expressed.

Figure 6.Sketched map of holographic coded pattern of the spatial spectrum inside each small lens.

Figure 7.Sketched map for restoration before the HFS is applied.

Figure 8.Sketched map for restoration after the HFS is applied.

Figure 9.Pictures taken from multiple directions of the holographically displayed digital 3D models. Q1: Can you find any differences between the nine pictures? Q2: Can you imagine the 3D relations of each object with only the clues of such differences? In the real 3D display, these relations are seen directly by eye on site.

Figure 10.Pictures taken from multiple directions of the holographically displayed digital 3D “skull.”
In conclusion, we have demonstrated the design and experimental results of an identifiable holographic display for human vision. The key is to transform visually redundant pixels into an identifiable hoxel display. Although the available 4 K flat-panel displayer can only provide a 2.5 mm hoxel size, the developing 8 K or even 16 K flat-panel displayers will eventually improve the final hoxel resolution to the eye-identifiable level, provided the lens aperture is bigger than the human pupil. We expect this novel device to find its first application in medical imaging, with the obvious advantage of seeing pictures in real 3D form simultaneously.