Infrared and Laser Engineering, Volume 53, Issue 9, 20240347 (2024)

Light field representation and its resolution improvement techniques: an overview (invited)

Runnan ZHANG1,2,3, Ning ZHOU1,2,3, Zihao ZHOU1,2,3, Heheng DU1,2,3, Qian CHEN2, and Chao ZUO1,2,3
Author Affiliations
  • 1Smart Computational Imaging Laboratory (SCILab), School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
  • 2Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing 210094, China
  • 3Smart Computational Imaging Research Institute (SCIRI), Nanjing 210019, China
    Figures & Tables (34)
    Schematic diagram of the plenoptic function and its simplified light field definition. (a) The 7D plenoptic function; (b) Radiance $L$ along a ray can be considered as the amount of light traveling along all possible straight lines through a tube, the size of which is determined by its solid angle and cross-sectional area; (c) Parameterizing a ray with coordinates $(x, y, z)$ and angles $(\theta, \varphi)$; (d) If there are no occlusions, the radiance along a ray remains constant, leading to redundancy in the plenoptic function
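    A compact way to state the reduction in (d), using a common notation assumed here rather than taken from the paper: fixing wavelength $\lambda$ and time $t$ in the full plenoptic function leaves a 5D radiance field, and the constancy of radiance along unobstructed rays removes one more dimension, leaving the 4D light field:

        $$ P(x, y, z, \theta, \varphi, \lambda, t) \longrightarrow L(x, y, z, \theta, \varphi) \longrightarrow L(u, v, s, t) $$

    where $(u, v, s, t)$ is any 4D ray parameterization, such as the two-plane form shown below.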
    Light field, Wigner distribution function (WDF), and augmented light field (ALF). (a) The phase of a wavefront is related to the angles of the corresponding rays. The WDF of a spherical wave at a given $z$ in Wigner coordinate space is similar to the light field in position-angle coordinate space: the propagation angles of rays are encoded in the local spatial frequency; (b) The WDF and the ALF support the representation of diffraction and interference phenomena by introducing a virtual negative projector[46]
    Alternative parameterizations of the 4D light field, which represents the flow of light through an empty region of 3D space. (a) Points on a plane or curved surface and directions leaving each point; (b) Pairs of points on the surface of a sphere; (c) Pairs of points on two planes in general (meaning any) position
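    A minimal sketch of the two-plane parameterization in (c), assuming a ray specified by a 3D origin and direction (the function name and plane positions are illustrative, not from the paper):

        import numpy as np

        def two_plane_coords(origin, direction, z_uv=0.0, z_st=1.0):
            """Parameterize a ray by its intersections (u, v) and (s, t)
            with two parallel planes z = z_uv and z = z_st."""
            origin = np.asarray(origin, dtype=float)
            direction = np.asarray(direction, dtype=float)
            if np.isclose(direction[2], 0.0):
                raise ValueError("Ray is parallel to the parameterization planes")
            # Intersection with the first plane gives (u, v)
            k_uv = (z_uv - origin[2]) / direction[2]
            u, v = (origin + k_uv * direction)[:2]
            # Intersection with the second plane gives (s, t)
            k_st = (z_st - origin[2]) / direction[2]
            s, t = (origin + k_st * direction)[:2]
            return u, v, s, t

        # Example: a ray through the origin at 45 degrees in the x-z plane
        print(two_plane_coords(origin=[0, 0, 0], direction=[1, 0, 1]))  # (0.0, 0.0, 1.0, 0.0)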
    Sampling models of a traditional camera and a light field camera. (a) The cone of rays summed to produce one pixel in a photograph; (b) Sampling of a photograph's light field provided by a plenoptic camera
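    The ray summation in (a) can be written as an integral of the in-camera light field over the lens aperture; in Ng's notation (with the $\cos^4$ falloff folded into $L_F$):

        $$ E_F(x, y) = \frac{1}{F^2} \iint L_F(x, y, u, v)\, \mathrm{d}u\, \mathrm{d}v $$

    where $F$ is the lens-sensor separation, $(u, v)$ are aperture coordinates, and $(x, y)$ are sensor coordinates. The plenoptic camera in (b) samples $L_F$ before this angular integration collapses the $(u, v)$ dimensions.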
    Three light field visualization methods. (a) The raw image read out by the sensor behind the microlens array; (b) The sub-aperture image; (c) The epipolar-plane image (EPI)
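    A minimal sketch of how the sub-aperture images in (b) relate to the raw image in (a), assuming an idealized sensor with exactly n x n pixels behind each microlens (the layout and names are illustrative assumptions):

        import numpy as np

        def extract_subaperture(raw, n, i, j):
            """Extract the (i, j)-th sub-aperture view from a raw lenslet image.

            raw : 2D array whose height and width are multiples of n, with an
                  n x n block of pixels behind each microlens.
            (i, j) picks the same pixel under every microlens, i.e. one angular
            sample; looping over all (i, j) yields all n*n sub-aperture views.
            """
            assert raw.shape[0] % n == 0 and raw.shape[1] % n == 0
            return raw[i::n, j::n]

        # Toy example: 4 x 4 microlenses with 3 x 3 pixels each
        raw = np.arange(12 * 12).reshape(12, 12)
        view = extract_subaperture(raw, n=3, i=1, j=1)  # central view
        print(view.shape)  # (4, 4)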
    Light field rendering[1]. (a) The object-movie function of QuickTime VR enables users to virtually navigate around an object (represented by a blue shape) by swiftly flipping through closely spaced photographs of it (indicated by red dots); (b) If the photographs are taken at intervals close enough, users can reorder the pixels to generate novel perspective views without the need to physically occupy those positions (denoted by a yellow dot); this process is known as light field rendering; (c) A light field can be conceptualized as a two-dimensional assembly of two-dimensional images, each captured from a distinct vantage point
    Light field acquisition and rendering systems. (a) The light field gantry built at Stanford University in 1996 acquires static light fields; full freedom in controlling objects, cameras, and lighting allows high-quality static light field data to be collected; (b) The multi-camera and multi-lighting dome from Tsinghua University[59]; (c) 360° light field rendering device from the University of Southern California[60]; (d) Google recorded immersive light field video using 46 action sports cameras mounted on an acrylic dome[61]
    Fourier slice theorem[67]. (a) Classical Fourier slice theorem; (b) Generalized Fourier slice theorem
    Fourier slice photograph theorem[67]. Transform relationships between the 4D light field $L_F$, a lens-formed 2D photograph $E_{\alpha \cdot F}$, and their respective Fourier spectra, $\mathfrak{L}_F$ and $\mathfrak{E}_{\alpha \cdot F}$
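    Written out, the diagram expresses refocusing as a 4D shear and projection in the spatial domain, and as a 2D slice in the Fourier domain; following Ng's formulation (up to his normalization conventions):

        $$ E_{\alpha \cdot F}(x, y) = \frac{1}{\alpha^2 F^2} \iint L_F\!\left( u + \frac{x - u}{\alpha},\ v + \frac{y - v}{\alpha},\ u,\ v \right) \mathrm{d}u\, \mathrm{d}v $$

        $$ \mathfrak{E}_{\alpha \cdot F}(k_x, k_y) = \frac{1}{F^2}\, \mathfrak{L}_F\big( \alpha k_x,\ \alpha k_y,\ (1 - \alpha) k_x,\ (1 - \alpha) k_y \big) $$

    so every photograph focused at depth $\alpha \cdot F$ is a 2D planar slice of the 4D light field spectrum, which is what makes Fourier-domain refocusing efficient.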
    Filtered light field photography theorem. Transform relationships between a 4D light field $L_F$, a filtered version of the light field $\overline{L}_F$, and photographs $E_{\alpha \cdot F}$ and $\overline{E}_{\alpha \cdot F}$[67]
    Defocus kernels in the 4D light field space. (a) Layout of the 4D lens spectrum, highlighting the focal manifold[70]; (b) 4D frequency hypercone for volumetric focus[71]
    Video-rate light-field microscopy imaging technology based on a microlens array. (a) Conventional microscopy and light field microscopy structure[72]; (b) Functional imaging of neuronal activity in the entire Caenorhabditis elegans and zebrafish larval brain[74]; (c) Functional imaging of neural activity during visually evoked and predatory behaviors in larval zebrafish[76]; (d) Video-rate volumetric Ca²⁺ imaging to 380-μm depth in mouse cortex[77]; (e) Characterization of light-field flow cytometry using fluorescent microspheres[78]
    Light field imaging with a phase diffuser. (a) DiffuserCam: pipeline for recording and reconstructing light fields with phase plates (a diffuser). The object light passes through an imaging lens and the phase plate, then propagates to the sensor, where caustics encode spatial and angular information. A linear inverse problem is solved to reconstruct the light field, which contains 3D information, enabling digital refocus, among other benefits[80]; (b) Fourier DiffuserScope: a diffuser or microlens array is placed in the Fourier plane of the objective (relayed by a 4f system), and a sensor is placed one microlens focal length behind it[82]; (c) MiniScope3D: the Miniscope's tube lens is removed, and a thick, optimized phase mask is placed at the aperture stop (Fourier plane) of the objective lens[83]
    The modulation and demodulation process of the light field in the Fourier domain for a heterodyne light field camera[84]
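    Schematically (1D shown; constants and the exact shear geometry are omitted, since they depend on the mask pattern and its distance from the sensor), the attenuating mask multiplies the light field, so the measured sensor spectrum contains shifted copies of angular slices of the light field spectrum:

        $$ \hat{I}(f_x) = \sum_k c_k\, \hat{L}(f_x,\ k f_{\theta 0}) $$

    demodulation rearranges these spectral tiles into a 4D spectrum and applies an inverse 4D Fourier transform to recover the light field.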
    Different configurations of camera arrays. (a) Light field video camera[87]; (b) Camera array structure designed by YANG[88]; (c) The large-scale camera array designed by WILBURN and JOSHI[90]; (d) Rendering high-quality dynamic scenes with eight cameras[58]; (e) Lytro's latest VR light-field camera, Immerge 2.0
    High-speed video sequence capture using camera arrays. (a) Slicing the spatiotemporal volume to correct rolling shutter distortion, and alignment of rolling shutter images in the spatiotemporal volume; (b) Overlapped exposures with temporal superresolution; (c) 1 560 frame/s video of a popping balloon, corrected to eliminate rolling shutter distortions[91]; (d)-(i) Hybrid synthetic aperture photography for combining high depth of field and low motion blur[90]. Images of a scene captured simultaneously by three different arrays: (d) A single camera with a long exposure time $I_a$; (e) A large synthetic aperture with a short exposure time $I_b$; (f) A large synthetic aperture with a long exposure time $I_c$; (g) Image obtained by computation ($I_a + I_b - I_c$); (h) Image with aliasing removed; (i) Image taken by a camera with a small aperture and a short exposure time
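    The combination in (d)-(g) is a pixelwise linear operation; a minimal numpy sketch, assuming the three captures are already registered and radiometrically matched (array names mirror the caption):

        import numpy as np

        def hybrid_aperture(I_a, I_b, I_c):
            """Combine three registered captures into a hybrid image.

            I_a : single camera, long exposure (high depth of field, motion blur)
            I_b : synthetic aperture, short exposure (low blur, shallow depth of field)
            I_c : synthetic aperture, long exposure (shares both artifacts)
            I_a + I_b - I_c cancels the shared blur/defocus components,
            approximating a high-depth-of-field, low-motion-blur image.
            """
            return np.clip(I_a.astype(float) + I_b - I_c, 0.0, None)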
    Miniaturized light field imaging system. (a) Portable light field lenses designed by Adobe Systems Inc.[93]; (b) PiCam: ultra-thin monolithic camera array[94]
    Light field superresolution. (a) Schematic of a 2D section of a light field camera; (b) Top row: one view from the captured LF image, a detail of the corresponding LF image, and a detail of the central view (one pixel per microlens, as in a traditional rendering). Bottom row: the estimated depth map (scale in m), the LF image rearranged as views, and the superresolved central view
    Light field deconvolution imaging based on wave optics theory. (a) Light field deconvolution based on the wave optics model and experimental results of volumetric imaging of pollen grains[73]; (b) In vivo volumetric calcium imaging of a larval zebrafish with the addition of a cubic phase mask[97]; (c) Dynamic volumetric imaging results of COS-7 cells by altering the optical path structure between the microlens array and the sensor[98]; (d) Optical path structure of Fourier light field microscopy and the imaging results[99]; (e) Principle of phase-space deconvolution and three-dimensional volumetric imaging results of Caenorhabditis elegans[100]
    Light field imaging based on prior knowledge constraints. (a) The structure of a compressive light field camera and the use of light field atoms as the fundamental building blocks of natural light fields to sparsely reconstruct a 4D light field from an optimized 2D projection[108]; (b) The principle of compressive light-field microscopy and the extraction of light-field signatures and 3D positions of individual neural structures[109]; (c) The principle of sparse decomposition light field microscopy and whole-brain imaging of larval zebrafish[110]
    Programmable aperture light field imaging. (a) Programmable aperture photography[113]; (b) Programmable aperture microscopy and multi-modal imaging[115]; (c) Multiplexed phase-space imaging for 3D fluorescence microscopy[116]; (d) 3D OTFs under different programmable apertures[117]; (e) Programmable aperture light field microscopy[118]
    Scanning light field superresolution imaging. (a) Principle of DAOSLIMIT and the neutrophil migration process observed in the liver of mice[128]; (b) Principle of the integrated meta-imaging sensor and multisite DAO against dynamic turbulence for ground-based telescopes[52]
    Hybrid high-resolution light field imaging. (a) Hybrid plenoptic camera/traditional camera high-resolution light field imaging[136]; (b) Hybrid CCD/Shack-Hartmann high-resolution light field microscopy imaging[139]
    Confocal light field microscopy[140-141]. (a) Design and characterization of confocal LFM; (b) Tracking and imaging whole-brain neural activity during larval zebrafish’s prey capture behavior; (c) Diagram of csLFM system; (d) The raw measurements on a thick brain slice after pixel realignment for the comparison among sLFM, cLFM and csLFM
    Principle of light field reconstruction with back projection. (a) Relationship between the captured image and the light ray field, and light ray field reconstruction from captured images using back projection[142]; (b) An example of the reconstructed EPIs of a real 3D scene[143]; (c) Iterative light field reconstruction based on SART method[144]
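    For a single focal plane, the back projection in (a) reduces to shift-and-add refocusing of sub-aperture views; a minimal sketch (a one-step stand-in for the iterative SART update in (c), with illustrative names and sign conventions):

        import numpy as np
        from scipy.ndimage import shift

        def refocus(views, positions, disparity):
            """Back-project sub-aperture views onto one focal plane.

            views     : list of 2D arrays, one per viewpoint
            positions : list of (du, dv) viewpoint offsets on the aperture plane
            disparity : pixel shift per unit offset for the chosen depth
            Shifting each view by its offset times the disparity and averaging
            brings that depth into focus; other depths blur out.
            """
            acc = np.zeros(views[0].shape, dtype=float)
            for img, (du, dv) in zip(views, positions):
                acc += shift(img.astype(float), (disparity * dv, disparity * du), order=1)
            return acc / len(views)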
    Light field representation of a slowly varying object under spatially stationary illumination[148]
    Angular superresolution diagram. (a) Disparity refinement[153]; after angular superresolution, high-quality and accurate occlusion boundaries can be observed in the resulting view interpolation; (b) By iteratively performing disparity estimation and view synthesis in the phase domain, a densely sampled four-dimensional light field is reconstructed from a micro-baseline stereo pair[154]
    Sparsity in the discrete vs. continuous Fourier domain, and the reconstruction results[156]. (a) The discrete Fourier transform (top) of a particular 2D angular slice of the crystal ball's light field, and its reconstructed continuous version (bottom); (b) A grid showing the original images from the Stanford light field archive, with the images used highlighted; (c), (d) Two examples of reconstructed viewpoints, showing successful reconstruction of this highly non-Lambertian scene. The uv locations of (c) and (d) are shown as blue and green boxes in (b)
    Epipolar-plane image formation and its frequency-domain properties[157]. (a) Frequency-domain structure of an EPI insufficiently sampled along the t-axis; the overlapping regions represent aliasing; (b) Desirable frequency-domain separation based on depth layering; (c) Frequency-domain separation based on dyadic scaling; (d) Composite directional- and scaling-based frequency-domain separation for sparse EPI representation
    Compact light-field imaging devices. (a) Head-mounted miniature light field microscope (MiniLFM): exploded (left) and sectional (right) views of the MiniLFM, with some parts rendered transparently for visual clarity; (b) Photo of an adult mouse with a head-mounted MiniLFM[167]; (c) The CM2 combines microlens array (MLA) optics and light-emitting diode (LED) array excitation in a compact and lightweight platform[168]
    Metalens array for light field imaging. (a) Schematic diagram of light-field imaging with a metalens array and rendered images[171]; (b) Schematic of integral imaging based on achromatic metalenses[172]; (c) Schematic of the transversely dispersive metalens, and an image of the letter “4” formed by the metalens under white-light illumination with a transmission window of 450–650 nm[173]
    Light field imaging in computational photography. (a1), (a2) Light field refocusing and (a3) extended depth-of-field technique[39]; (b) High-dynamic-range panoramic videography: (b1) with all cameras set to the same exposure level, saturated areas appear in sunlight and dark regions in the shade, (b2) individual exposure settings for each camera produce a high-dynamic-range image[90]; (c) Synthetic aperture imaging: (c1) a sample image from a single camera, (c2) synthetic aperture focusing on the plane where the people are located, computed by aligning and averaging images from all cameras, (c3) suppressing contributions from static pixels in each camera yields a more vivid view of the scene behind the occluder[90]
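    A minimal sketch of the exposure fusion behind (b2), assuming registered 8-bit images with linear response and known per-camera exposure times; this is a textbook weighted HDR merge, not necessarily the authors' exact pipeline:

        import numpy as np

        def merge_hdr(images, exposures):
            """Merge registered, linear 8-bit images taken with different exposures.

            Each pixel is divided by its exposure time to estimate radiance;
            a hat weight de-emphasizes under- and over-exposed values.
            """
            num = np.zeros(images[0].shape, dtype=float)
            den = np.zeros_like(num)
            for img, t in zip(images, exposures):
                x = img.astype(float) / 255.0       # normalize 8-bit input
                w = 1.0 - np.abs(2.0 * x - 1.0)     # hat weighting in [0, 1]
                num += w * x / t
                den += w
            return num / np.maximum(den, 1e-8)      # estimated radiance map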
    The AWARE-2 camera array and its imaging results[175]
    Principles and devices of light field display. (a) The principle of light field capture and display[179]; (b) Eyeglasses-free display technology[180]; (c) Near-eye light field display technology[181]
    Paper Information

    Category: Special issue—Computational optical imaging and application Ⅱ

    Received: Jun. 4, 2024

    Published Online: Oct. 22, 2024

    DOI: 10.3788/IRLA20240347
