Acta Optica Sinica (Online), Volume 1, Issue 1, 0111001 (2024)

Computational Imaging Based on Focal Plane-Coded Modulation: A Review (Invited)

Bowen Wang1,2,3, Xu Zhang1,2,3, Haitao Guan1,2,3, Kunyao Liang1,2,3, Sheng Li1,2,3, Runnan Zhang1,2,3, Min Zeng1,2,3, Zihao Pei1,2,3, Qian Chen3,*, and Chao Zuo1,2,3,**
Author Affiliations
  • 1Smart Computational Imaging Laboratory, School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing 210094, Jiangsu, China
  • 2Smart Computational Imaging Research Institute (SCIRI) of Nanjing University of Science and Technology, Nanjing 210019, Jiangsu, China
  • 3Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing 210094, Jiangsu, China
    Figures & Tables (62)
    Radiation pyramid light diagram drawn by da Vinci in the 16th century[1]
    Schematic of the plenoptic function, denoted as L(x,y,z,θ,ψ,λ,t), which can capture high-dimensional light field information such as wavelength, depth, and ray angle by adding a color filter array and a microlens array in front of the sensor
    Computational imaging processing based on focal plane coded modulation
    Classification and representation of computational imaging technology based on focal plane coding modulation
    Correspondence between image pixel resolution and pixel size (Nyquist sampling). (a) The detector's pixel pitch is directly related to its minimum resolvable linewidth; (b) relationship between pixel size (Nyquist sampling) and sampled information: as the sampling frequency increases, more imaging details can be obtained; (c) detection results of a long-wave infrared detection system for building targets at different imaging distances (pixel size of 38 μm, pixel resolution of 320×240, lens focal length of 50 mm)
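The Nyquist relation in the caption above can be made concrete with a short numerical sketch (not from the paper): one resolvable line pair spans two pixels, so the focal-plane limit is twice the pixel pitch, and back-projecting that limit through a lens of focal length f gives the smallest resolvable feature on a target at distance d. The function names and the 100 m example distance are illustrative assumptions.

```python
def nyquist_resolution_um(pixel_pitch_um: float) -> float:
    """Minimum resolvable line-pair width at the focal plane (Nyquist):
    one line pair spans two pixels, so the limit is twice the pixel pitch."""
    return 2.0 * pixel_pitch_um

def target_resolution_mm(pixel_pitch_um: float,
                         focal_length_mm: float,
                         distance_m: float) -> float:
    """Smallest resolvable feature on a target at the given distance,
    obtained by back-projecting the 2-pixel Nyquist limit through the lens."""
    pitch_mm = pixel_pitch_um * 1e-3
    return 2.0 * pitch_mm * (distance_m * 1e3) / focal_length_mm

# Example with the caption's parameters (38 um pitch, 50 mm focal length):
# at 100 m, the smallest resolvable target feature is
# 2 * 0.038 mm * 100000 mm / 50 mm = 152 mm
print(target_resolution_mm(38, 50, 100))
```

With the caption's 38 μm pitch and 50 mm lens, features smaller than roughly 0.15 m become unresolvable at 100 m, consistent with the degradation at longer imaging distances shown in panel (c).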
    Single frame image super-resolution reconstruction algorithm based on image feature extraction[36-37]. (a) Structure of single frame image super-resolution neural network based on image feature extraction; (b) comparison of super-resolution reconstruction results of single frame image neural network
    Basic principle of passive micro-scanning sub-pixel displacement super-resolution imaging[38]
    Principle of Google handheld super-resolution imaging technology[40]
    Micro-scanning devices. (a) MEMS micro-scanning device; (b) galvanometer; (c) piezoelectric ceramic micro-scanning device
    Schematic of infrared micro-scanning imaging system[41]. (a) Physical system diagram; (b) imaging principle diagram
    Sony sub-pixel super-resolution imaging technology[42]. (a) Principle of Sony pixel shift multi-shooting technology; (b1) super-resolution imaging results of texture detail scenes; (b2) super-resolution imaging results of architectural detail scenes
    Jitter camera system[43]. (a) Physical setup and imaging principle of jitter camera; (b1) a low resolution video image captured by the jitter camera system; (b2) video super-resolution reconstruction results based on active micro-scanning
    Two aperture coding imaging schemes developed by QinetiQ. (a) Lensless aperture coding imaging scheme[45]; (b) lens-based aperture coding imaging scheme[46]
    Imaging system based on binary optical switching devices and corresponding reconstruction results. (a) Large-aperture infrared monitoring system based on aperture coding developed by QinetiQ (the role of the swing mirror is to automatically switch between the blackbody radiation source and the target scene)[47]; (b1) binary optical switch component based on MOEMS and (b2) physical image of the device developed by QinetiQ[47]; (c1) low-resolution infrared image and (c2) super-resolution reconstruction result achieved by the system[48]
    Near-infrared super-resolution imaging system based on CS theory developed by Rice University[49]
    Compressive coded aperture super-resolution imaging results[36]. (a) Ground truth image; (b) uncoded low-resolution image, with a root-mean-square error (RMSE) of 0.1011; (c) modulated low-resolution image; (d) reconstruction of the low-resolution modulated image, with an RMSE of 0.0867; (e)(f) reconstructed images after adjusting the parameters h and p, with RMSEs of 0.0897 and 0.0924, respectively
    Coded aperture super-resolution imaging scheme based on CS theory[51]. (a) Principle of coded aperture super-resolution imaging based on CS theory; (b) forward model of coded aperture super-resolution imaging based on CS theory; (c1) original infrared image with 240 pixel×240 pixel; (c2) encoded low-resolution infrared image; (c3) reconstructed super-resolution image; (c4) error between the ground truth image and the super-resolution image; (c5) PSNR of the reconstructed image at different reconstruction magnifications
    Super-resolution imaging scheme based on time delay and encoding exposure[52]. (a) Architecture of TDI CECI system; (b) experimental setup of TDI CECI system; (c) timing synchronization scheme between encoding exposure and TDI line transmission; (d) super-resolution comparison results
    Camera array imaging scheme[53]. (a1) Synthetic image captured by four cameras at different exposure times; (a2) local detail of the synthetic image captured by four cameras; (a3) image captured by a Canon 20D; (a4) enlarged local detail of the image captured by the Canon 20D; (b1) a camera array equipped with telephoto lenses for high-resolution imaging; (b2) a camera array equipped with wide-angle lenses for high-speed video imaging; (b3) a compound-eye array system equipped with an independent processor
    Sub-pixel super-resolution imaging method based on camera array[55]. (a) A four camera array super-resolution system; (b) comparison between super-resolution reconstruction results and low resolution images; (c) schematic of forward model and sub-pixel super-resolution principle for camera array imaging
    Super-resolution reconstruction results based on microlens array[56]. (a) Schematic of relative displacement of sub-images generated by each lens in the lens array; (b1)(b4)(b7) low-resolution images of different scenes; (b2)(b5)(b8) super-resolution reconstruction results using the proposed method; (b3)(b6)(b9) super-resolution results using pixel rearrangement; (c) sub-pixel super-resolution imaging flowchart for the microlens array compound-eye camera
    Super-resolution imaging scheme based on microlens array[57]. (a) Principle of the microlens array system; (b) low-resolution images captured by the microlens array; (c) physical image of the system; (d)(e) simulation reconstruction results of different super-resolution algorithms
    Light field sensing based on focal plane coding. (a) Four-dimensional (4D) parameterization of light field; (b) light field coded image stack obtained by encoding at the focal plane; (c) application of light field sensing in all-focus imaging and aberration correction
    Shack‒Hartmann wavefront sensor[59]. The wavefront over each sub-aperture segmented by the microlens array is focused into a spot with its own intensity distribution on the image plane. The local slope of the wavefront in each sub-aperture can be calculated by accurately detecting the centroid position of its spot, and a wavefront reconstruction algorithm then recovers the overall wavefront shape
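The centroid-to-slope computation described in this caption can be sketched in a few lines of NumPy. This is a toy illustration only: the 9×9 sub-aperture, 5 μm pixel pitch, and 5 mm lenslet focal length are arbitrary assumptions, and real sensors add background subtraction, thresholding, and noise handling before the centroid step.

```python
import numpy as np

def spot_centroid(sub_image):
    """Intensity-weighted centroid (row, col) of one sub-aperture spot."""
    total = sub_image.sum()
    rows, cols = np.indices(sub_image.shape)
    return ((rows * sub_image).sum() / total,
            (cols * sub_image).sum() / total)

def local_slopes(sub_image, ref_centroid, pixel_pitch, focal_length):
    """Local wavefront slopes (in radians) over one sub-aperture:
    centroid shift from the reference spot, converted to metres and
    divided by the lenslet focal length."""
    cy, cx = spot_centroid(sub_image)
    ry, rx = ref_centroid
    return ((cy - ry) * pixel_pitch / focal_length,
            (cx - rx) * pixel_pitch / focal_length)

# A spot displaced by +2 pixels in x relative to the reference centroid
# corresponds to a slope of 2 * pitch / f in that direction.
img = np.zeros((9, 9))
img[4, 6] = 1.0                      # spot centred at (4, 6)
sy, sx = local_slopes(img, ref_centroid=(4.0, 4.0),
                      pixel_pitch=5e-6, focal_length=5e-3)
# sx = 2 px * 5 μm / 5 mm = 2e-3 rad; sy = 0
```

Repeating this over every sub-aperture yields the slope map that the wavefront reconstruction algorithm (e.g. zonal or modal least squares) integrates into the full wavefront shape.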
    High-resolution wavefront sensor based on SLM. (a) WISH imaging system and its fingerprint target reconstruction results[60]; (b) WISHED imaging system, which uses dual-wavelength correlation to generate synthetic wavelengths for high-precision wavefront sensing[61]
    Plenoptic camera for macroscopic imaging[64]. (a) Structure comparison between conventional cameras and plenoptic cameras 1.0 and 2.0; (b) comparison of recorded information between conventional cameras and plenoptic cameras 1.0 and 2.0; the diagram shows the sampling of an object at the focal plane, and light coming from outside the focal plane is clipped; (c) different hardware implementations of plenoptic cameras
    Meta-imaging sensor. (a) Meta-imaging sensor composed of a periodic circular pattern with a circular intensity mask, a microlens array, a piezo stage, and a conventional CMOS sensor[71]; (b) digital adaptive optical reconstruction of images containing dynamic turbulence carried out with a ground-based telescope[72]
    Light field microscope[73-74]. (a) Microscope prototype composed of a Nikon Optiphot and a custom microlens array (red circle); (b) light field images of fluorescent wax particles taken through microscope objective and microlens array
    Light field microscopy used for neuroimaging of animal brains. (a) Light field microscopic imaging structure and its experimental results[75]; (b) light field microscopy used to image zebrafish[76]
    Digital adaptive optics scanning light-field mutual iterative tomography (DAOSLIMIT) imaging system[77]. (a) Principle of scanning light field microscopy of DAOSLIMIT and the process of digital adaptive optics; (b) application of DAOSLIMIT in long-term, high-speed subcellular imaging of mice
    Dappled photography light field imaging[79]. (a) Light field camera structure, with a narrowband two-dimensional cosine mask placed near the line scanning sensor (as shown in the bottom left corner); (b) light field camera principle: in ray space, a cosine mask at d casts a shadow on the sensor; in the Fourier domain, the scene spectrum (left, green) is convolved with the mask spectrum (middle) to produce offset spectrum blocks (right); (c) spectrum and reconstruction results of the acquired image, with 81 spectrum patches corresponding to an angular resolution of 9×9
    Light field imaging based on diffuser[80]. (a) Procedure for recording and reconstructing the light field using a diffuser: object light propagates to the sensor after passing through the imaging lens and diffuser, caustics encode spatial and angular information in the focal plane, and the light field containing 3D information is reconstructed by solving a linear inverse problem, enabling digital refocusing and other functions; (b) light field reconstruction of two playing cards
    Light field microscopic imaging based on diffuser. (a) Fourier DiffuserScope, where the diffuser is placed in the Fourier plane of the objective (relayed by the 4f system) and the sensor is placed one microlens focal length behind it; from a single 2D sensor measurement, combined with a stack of previously calibrated point spread functions, the 3D object can be reconstructed by solving a sparsity-constrained inverse problem[81]; (b) Miniscope3D, where a 55 μm thick optimized phase mask is placed at the aperture stop (Fourier plane) of the objective; a sparse set of calibrated point spread functions (64 per depth) is collected by scanning 2.5 μm green fluorescent beads throughout the volume, and this dataset is used to pre-calculate an accurate forward model of aberrations varying with the field of view, enabling the reconstruction of 3D volumes from a single 2D measurement[82]
    Lensless imaging system[90-91]. (a) Forward model of lensless imaging with different modulation modes; (b) characteristic of point spread function (PSF) in lensless imaging, including depth-dependence and lateral-dependence, which provides depth and 2D intensity encoding for lensless imaging systems respectively; (c) 3D reconstruction by solving the inverse problem
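The forward model and inverse problem summarized in this caption can be sketched in 2D. This is a minimal, noise-free toy: a pseudorandom PSF stands in for the real caustic or mask PSF, and a one-step Tikhonov-regularized Fourier inversion stands in for the iterative sparsity-regularized solvers actually used for 2D and 3D lensless reconstruction.

```python
import numpy as np

def forward(scene, psf):
    """Shift-invariant lensless forward model:
    sensor measurement = scene convolved with the PSF (circular convolution)."""
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))

def tikhonov_deconv(measurement, psf, reg=1e-6):
    """One-step Tikhonov-regularized inversion in the Fourier domain,
    a simple stand-in for the iterative solvers used in practice."""
    H = np.fft.fft2(psf)
    M = np.fft.fft2(measurement)
    return np.real(np.fft.ifft2(np.conj(H) * M / (np.abs(H) ** 2 + reg)))

rng = np.random.default_rng(0)
scene = np.zeros((64, 64))
scene[20, 30] = 1.0                  # two point sources as a toy scene
scene[40, 10] = 0.5
psf = rng.random((64, 64))           # caustic-like pseudorandom PSF
psf /= psf.sum()                     # normalize total transmission
meas = forward(scene, psf)           # multiplexed, visually unrecognizable
recon = tikhonov_deconv(meas, psf)   # point sources reappear after inversion
```

Because every scene point spreads over the whole sensor, the raw measurement looks like noise; the structured PSF is what makes the linear system invertible, and depth-dependent PSFs extend the same idea to 3D reconstruction as in panel (c).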
    FlatCam[94-95]. (a) System prototype and experimental scenario; (b) sensor measurements and (c) reconstruction results in different scenarios
    FlatScope[96]. (a) Prototype and its mask; (b) 3D volumetric reconstruction of a moving 10 μm fluorescent bead
    Lensless imaging based on FZA. (a) FZACam[98-101]; (b) single-shot FZA reconstruction algorithm[102]
    Lensless imaging of spiral PSF[103-105]. (a) Imaging system prototype; (b) spiral binary grating and its point spread function; (c) sensor measurement and reconstruction results
    DiffuserCam[90]. (a) System setup and reconstruction processes; (b) 3D reconstruction result of slanted resolution target, cropped to 640 voxel×640 voxel×50 voxel; (c) 3D reconstruction of plant leaves, cropped to 480 voxel×320 voxel×128 voxel
    On-chip wide-field fluorescence microscopy based on random microlens scattering[106]. (a) System architecture and sparse PSF calibration method; (b) fluorescent beads flow in microfluidic channels; (c) NeuroD:GCaMP6f larval zebrafish
    PhlatCam[107]. (a) Comparison of PhlatCam with conventional imaging system; (b) PSF design; (c) refocusing experimental results; (d) 3D reconstruction experimental results
    Lensless camera based on programmable mask, which can be divided into “expanding imaging function” (blue background) and “improving imaging quality” (orange background). (a) Multi-layer programmable LCD mask camera structure[109]; (b) camera structure based on CS[112]; (c) SweepCam[114]; (d) structure of coded aperture camera with mask pattern changed by rotation[115]; (e) NoRDS-CAIC[116]
    Different schemes of MSFA. (a) 2×2 Bayer MSFA[117]; (b) 3×3 MSFA; (c) 4×4 MSFA; (d) uniform MSFA; (e) random MSFA; (f) hexagonal pixel-based MSFA[128]
    Low-cost integrated hyperspectral imaging sensor[130]. (a) Composition and structure of integrated hyperspectral imaging sensor; (b) transmission response of 16 materials, average light throughput of each material, and correlation coefficients matrix between each spectral response; (c) experimental results of this hyperspectral imaging sensor
    Scheme of real-time hyperspectral imaging[131]. (a) Conceptual scheme of compressive hyperspectral sensing; (b) transmittance patterns of a coded mask at four different wavelengths; (c) structure of coded masked hyperspectral sensor
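The compressive hyperspectral sensing scheme in panel (a) amounts to a linear forward model: each sensor pixel records the scene's spectrum weighted by that pixel's wavelength-dependent mask transmittance and integrated over wavelength. A toy sketch follows; the random mask and scene are illustrative assumptions, and the reconstruction step, which requires sparsity priors, is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, L = 8, 8, 16                   # spatial size and number of spectral bands

# Wavelength-dependent coded mask: each pixel has its own transmittance
# spectrum (random here purely for illustration).
mask = rng.random((H, W, L))
scene = rng.random((H, W, L))        # toy hyperspectral datacube

# Forward model: the sensor integrates the spectrally modulated scene
# over wavelength, producing a single 2D coded measurement.
measurement = (mask * scene).sum(axis=2)

# Per pixel this is one linear equation in L unknowns, so recovering the
# full datacube from one frame is underdetermined and relies on sparsity
# priors in the reconstruction.
```

The same per-pixel linear model underlies the quantum-dot, photonic-crystal, and metasurface filter sensors in the following figures; they differ mainly in how the transmittance spectra are physically realized.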
    Scheme of metasurface-based narrowband filter[133]. (a) Optical image of the fabricated 100-pixel metasurface; (b) linear relationship between scaling factor and ellipse feature size confirmed by SEM images; (c) scheme of the imaging system; (d)(e) reflectance images of the pixelated metasurface recorded at four specific wavenumbers
    Spectral sensor based on photonic crystal filter[142]. (a) Sensor principle and device image; (b) experimental results of spectral imaging
    Spectral sensor chip based on quantum dot filter and its spectral reconstruction algorithm[145]
    DiffuserSpec[146]. (a) Schematic of conventional spectrometer and DiffuserSpec; (b) reconstruction algorithm and comparison of coding patterns at 818 nm and 828 nm; (c) reconstruction results of narrowband spectra; (d) analysis of spectral resolution
    Spectral DiffuserCam[147]. (a) Overview of the Spectral DiffuserCam imaging; (b) PSF calibration under different wavelengths; (c) reconstruction results of hyperspectral image
    Ultra-simplified computational spectrometer[148]. (a) Principle and structure of the system; (b) reconstruction results of spectral image
    THETA multi-spectral camera[150]. (a) Scheme of the system; (b) multi-spectral imaging of the diffusion process of sewage in a miniature river landscape
    Global shutter periodic scene imaging[165]. (a) System principle; (b) reconstruction results of tool head at different rotation speeds
    Programmable pixel compressive camera (P2C2) [166]. (a) Schematic of P2C2 principle; (b) optical system diagram and actual setup; (c) comparison of reconstructed results from a video sequence showing a marble dropping into water
    Video extraction from a single encoded exposure photo using a learned over-complete dictionary[169]. (a) Prototype of the system, simulating individual pixel exposure on the sensor surface via LCoS; (b) proposed method consists of three parts, i.e., encoded exposure sampling and projection of the spatiotemporal volume onto the image, learning an over-complete dictionary from training video data, and sparsely reconstructed spatio-temporal information from a single coded image; (c) experimental reconstruction results
    Translation mask temporal multiplexing coded imaging[171]. (a) Forward sensing model for translation mask temporal multiplexing coded imaging; (b) schematic of system design; (c) reconstruction of a card in a scene with arbitrary motion using constrained least squares method with a high-pass filter
    Coded rolling shutter[167]. (a) Address generator in the CMOS image sensor is used to implement a coded rolling shutter with the desired row reset and row selection patterns for flexible spatiotemporal sampling; (b) interlaced readout for high-speed photography
    High-speed video from a single rolling shutter image captured by a lensless computational camera[172]. (a) Algorithm principle; (b) image formed from a temporally varying scene, where two point sources (one yellow and one blue) flash at unique y positions at times t0 and t1; (c) experimental video reconstructed from a single captured image with an exposure time of 660 µs
    Compressed ultrafast photography[164]. (a) Schematic of the system; (b) representative time frames showing the laser pulse being reflected by a mirror in the air, refracted at the air-resin interface, and the race between two laser pulses
    Multi-encoding CUP and corresponding imaging results[175]. (a) Schematic of data acquisition, where t represents time, Ck denotes spatial encoding operators, S is the temporal shearing operator, and T is the spatiotemporal integration operator; (b) schematic of the system; (c) experimental results of spatially modulated laser pulses under different encoding numbers
    Compressed ultrafast photography with a frame rate of 10^13 frame/s[177]. (a) Schematic of the system; (b) T-CUP performs real-time imaging of fundamental optical phenomena of laser pulse scanning, spatial focusing, reflection, and splitting
    Compressed ultrafast spectral photography[179]. (a) Schematic of active CUSP system for 70 Tframe/s imaging; (b) CUSP imaging of ultrafast linear optical phenomena

    Bowen Wang, Xu Zhang, Haitao Guan, Kunyao Liang, Sheng Li, Runnan Zhang, Min Zeng, Zihao Pei, Qian Chen, Chao Zuo. Computational Imaging Based on Focal Plane-Coded Modulation: A Review (Invited)[J]. Acta Optica Sinica (Online), 2024, 1(1): 0111001

    Paper Information

    Category: Research Articles

    Received: Sep. 13, 2024

    Accepted: Sep. 26, 2024

    Published Online: Oct. 13, 2024

    The Author Email: Chen Qian (chenqian@njust.edu.cn), Zuo Chao (zuochao@njust.edu.cn)

    DOI: 10.3788/AOSOL240452

    CSTR: 32394.14.AOSOL240452
