Advanced Imaging, Volume 1, Issue 1, 012001 (2024)

Future-proof imaging: computational imaging

Jinpeng Liu1,2,†, Yi Feng1, Yuzhi Wang1, Juncheng Liu1, Feiyan Zhou1, Wenguang Xiang1, Yuhan Zhang1, Haodong Yang1, Chang Cai1, Fei Liu1,2,*, and Xiaopeng Shao3,*
Author Affiliations
  • 1School of Optoelectronic Engineering, Xidian University, Xi’an, China
  • 2Xi’an Key Laboratory of Computational Imaging, Xi’an, China
  • 3Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an, China
    Figures & Tables (68)
    Development of optical imaging.
    Light field projection.
    Relationship between the light field and the imaging chain.
    Computational light source. (a) Light vector modulation: ptychographic iterative engine[59] and Fourier ptychographic microscopy[73]. (b) Phase modulation: structured-light 3D imaging[110] and structured illumination microscopy[111,112]. (c) Coherent imaging: optical coherence tomography[103,329] and holography[109]. (d) Time modulation: coded exposure[330] and time of flight[331]. (e) Wavelength modulation: stochastic optical reconstruction microscopy[48] and synthetic wavelength holography[57].
    Coding methods and experimental results of structured-illumination 3D imaging. (a) Example of a pattern sequence that combines gray-code and phase-shift projection[11]. (b) Novel phase-coding method for absolute phase retrieval[12]. (b1) The sinusoidal fringe pattern and the wrapped phase obtained from it. (b2) Phase-coding fringe and the codewords extracted from it. (c) Comparison of projection results between the phase-coding-based method and the traditional phase-shifting method[12]. (c1)–(c3) Three sinusoidal phase-shifted fringe images. (c4) Wrapped phase map. (c5)–(c7) Three phase-encoded fringe patterns. (c8) Wrapped stair phase map. (d) Phase-measuring profilometry based on the composite color-coding method[15]. (d1) Schematic of the feature-point mapping principle. (d2) 3D shape of a stair model. (d3) Experimental result.
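The wrapped phase in (c4) comes from the standard phase-shifting relations. Below is a minimal sketch that assumes a plain three-step scheme with 2π/3 phase steps and synthetic fringes; it only illustrates the recovery step and is not the gray-code/phase-coding pipeline of Refs. [11,12].

```python
# Minimal sketch: wrapped-phase retrieval from three sinusoidal fringe images
# with assumed phase steps of -2*pi/3, 0, +2*pi/3 (illustrative only; not the
# exact gray-code/phase-coding scheme of Refs. [11,12]).
import numpy as np

def wrapped_phase(I1, I2, I3):
    # For I_n = A + B*cos(phi + delta_n), the three-step solution is
    # phi = atan2(sqrt(3)*(I1 - I3), 2*I2 - I1 - I3), wrapped to (-pi, pi].
    return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)

# Synthetic example: a linear carrier phase recovered up to wrapping.
x = np.linspace(0, 8 * np.pi, 512)
phi_true = x
deltas = (-2 * np.pi / 3, 0.0, 2 * np.pi / 3)
I1, I2, I3 = (0.5 + 0.4 * np.cos(phi_true + d) for d in deltas)
phi_wrapped = wrapped_phase(I1, I2, I3)   # still needs unwrapping
```

The wrapped phase must still be unwrapped to an absolute phase, which is exactly what the gray-code or phase-coding patterns in (a) and (b) provide.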
    Common SIM scheme and experimental results. (a) Schematic of the four-beam experimental setup[34]. (b) Simulated imaging performance on a fibrous ground truth test image, shown as an x–z slice[34]. (b1) Ground truth. (b2) Three-beam SIM. (b3) I5S (dual-objective six-beam SIM + interferometric detection). (b4) Dual-objective four-beam SIM (without interferometric detection). (c) Key steps in implementing instant structured illumination[39]. (c1) A converging microlens array is used to produce a multifocal excitation. (c2) Out-of-focus fluorescence is rejected with a pinhole array that is matched to the microlens array. (c3) A twofold local contraction of each pinhole fluorescence emission is achieved with the aid of a second, matched microlens array. (c4) A galvo serves to raster multifocal excitation and sum multifocal emission, producing a super-resolution image during each camera exposure. (d) Comparison between traditional SIM and cSIM[40]. (d1) Conventional SIM relies on a high-NA objective lens for both excitation and collection. (d2) cSIM harnesses interference in a waveguide to excite the specimen via evanescent fields, decoupling the excitation and collection light paths.
    Scheme of STEDD microscopy[43]. (a) Sketch of the STEDD, including the sequence of excitation and depletion pulses. (b) Detailed temporal sequence of fluorescence excitation. Shortly after the excitation pulse, the first STED1 pulse (intensity profile visualized in the x–z plane) depletes the majority of excited fluorophores except for those near the center. A fraction of fluorophores in peripheral regions of the observation volume still escape depletion or are re-excited by the STED beam. The second weaker STED2 pulse (intensity profile also visualized in the x–z plane) depletes excited fluorophores near the center but leaves those in the periphery unaffected. (c) Combined confocal and STEDD image of a COS-7 cell expressing the mGarnet–RITA fusion protein as a microtubule marker.
    PSF-based multicolor STORM and deep learning-based STORM. (a)–(c) Multicolor STORM[47]. (a) Raw data from the recorded super-resolution imaging movie. Insets: two enlarged example PSFs of a green label (horizontally elongated, top) and a red label (vertically elongated, bottom) with arrows indicating the elongation direction. (b) Super-resolution image obtained by localizing each emitter in the movie and assigning its color (red, microtubules; green, mitochondria). Inset: diffraction-limited data. (c) Histogram of all of the localizations within the dotted white box surrounding an ∼2 μm-long microtubule section in (b) (dark gray, FWHM=53 nm) and the diffraction-limited intensity cross-section from the same region (light gray, FWHM=329 nm). (d) FD-DeepLoc inference process[53].
    Schematic and experimental results of synthetic wavelength holography (SWH) for NLoS imaging through scattering media[57]. (a) SWH image formation and reconstruction; the synthetic wavelength Λ = λ1λ2/|λ1 − λ2| is used in the reconstruction process. (b) Experimental results. (b1)–(b4) Reconstructions of measurements taken through the ground glass diffuser for different SWLs Λ. (b5)–(b8) Reconstructions of measurements taken through the milky plastic plate for different SWLs Λ.
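The synthetic wavelength defined in (a) is straightforward to evaluate. The sketch below uses two illustrative wavelengths (not the values of Ref. [57]) to show how closely spaced optical wavelengths yield a centimeter-scale SWL.

```python
# Synthetic wavelength Lambda = lambda1 * lambda2 / |lambda1 - lambda2|.
# The two optical wavelengths below are illustrative, not those of Ref. [57].
def synthetic_wavelength(lam1, lam2):
    return lam1 * lam2 / abs(lam1 - lam2)

lam1, lam2 = 854.985e-9, 855.015e-9          # two closely spaced wavelengths [m]
print(synthetic_wavelength(lam1, lam2))      # ~2.4e-2 m: a centimeter-scale SWL
```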
    Multi-angle illumination lensless imaging and mask-modulated lensless imaging. (a)–(d) Multi-angle illumination lensless imaging[59]. (a) Optical setup of the multi-angle illumination lensless imaging system. (b) The corresponding forward model. (c) The corresponding single-shot measurement. (d) Recovered results for a USAF-1951 resolution chart. (e), (f) Mask-modulated lensless imaging[60]. (e) Forward imaging model of mask-modulated lensless imaging. (f) Comparison of the recovered images using the USAF-1951 resolution target.
    FPM and corresponding illumination improvement strategies. (a) Iterative recovery procedure of FPM (five steps)[62]. (b) Multiplexed coded illumination for FP with an LED array microscope[66]. (Top) Four randomly chosen LEDs are turned on for each measurement. (Middle) The captured images corresponding to each LED pattern. (Bottom) Fourier coverage of the sample’s Fourier space for each of the LED patterns (drawn to scale). (c) Experimental setup of FP based on the laser illumination source[72].
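The five-step iterative recovery in (a) alternates between the spatial and Fourier domains. The sketch below is a stripped-down alternating-projection version under assumed ideal conditions (known binary pupil, known sub-spectrum positions), omitting the pupil and aberration recovery used in practical FPM implementations such as Ref. [62].

```python
# Bare-bones FPM-style update (sketch): for each LED image, crop the filtered
# sub-spectrum, enforce the measured low-resolution amplitude, and write the
# result back. 'positions' and 'pupil' are assumed known; real pipelines add
# pupil recovery, normalization, and convergence checks.
import numpy as np

def fpm_pass(obj_spec, pupil, positions, measurements):
    n = pupil.shape[0]                                    # low-res patch size
    for (ky, kx), meas in zip(positions, measurements):
        region = obj_spec[ky:ky + n, kx:kx + n]
        sub = region * pupil                              # 1) crop filtered sub-spectrum
        low = np.fft.ifft2(np.fft.ifftshift(sub))         # 2) low-res complex field
        low = np.sqrt(meas) * np.exp(1j * np.angle(low))  # 3) replace amplitude with measurement
        upd = np.fft.fftshift(np.fft.fft2(low))           # 4) back to the Fourier domain
        obj_spec[ky:ky + n, kx:kx + n] = np.where(pupil > 0, upd, region)  # 5) update support
    return obj_spec                                        # repeat passes until convergence
```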
    3D imaging and scattering imaging based on ToF. (a) Experimental results of range-gated laser imaging based on the time slice[75]. Terrain vehicle imaged from ranges of 1.9 km (left) and 7.2 km (right). (b) The imaging results of range-gated laser 3D imaging based on intensity correlation at different distances[76]. (c) 3D structure of the towers derived from the polarization-modulated 3D imaging lidar[78]. (d) Principle and results of imaging through realistic fog with a SPAD camera[82].
    Experimental results of different deblurring methods. (a) Coded exposure adapted to the speed of an object's motion[86]. Column 1: input images. Column 2: matching metric versus velocity. Column 3: deblurred results using the estimated velocity. (b) Comparison of the deblurring performance with different sequence lengths under the same exposure[88]. (b1) Sequence length = 40, chop duration = 3 ms. (b2) Sequence length = 120, chop duration = 1 ms.
    TDOCT structures and metasurface-based bijective illumination collection imaging (BICI). (a) Simplified block diagram of the TDOCT method[103]. (b) Incorporation of BICI through one arm of an interferometer (orange lines represent a single-mode fiber)[102]. (c) Tissue imaging comparison of BICI and a conventional approach[102]. Imaging swine tracheobronchial tissue specimens using a plano-convex lens with common illumination and collection paths (c1, c2, c5, and c6) and BICI (c3, c4, c7, and c8). (c9) Corresponding histology image of the tissue imaged using the conventional approach.
    (a) Several examples of scattering imaging using ballistic light[117,125,141]. (b) Several examples of computational light field restoration techniques based on scattered light[146,152,161]. (c) Non-line-of-sight (NLOS) imaging[169].
    (a)–(d) Imaging results of single-photon LiDAR at 8.2 km[117]. (e) Schematic diagram of the experimental setup[118]. (f) Ideal analogue resolution charts[117]. (g) Simulation under low-light and low-brightness conditions[117].
    (a) Aerial view of the remote active imaging experiment. (b) Results obtained based on different imaging algorithms. (c) Long-range 3D imaging over 45 km[119].
    (a) Haze imaging model[120]. (b) Flow chart of the DCP dehazing algorithm[121]. (c) Comparison of other dehazing algorithms with Lu's dark-channel dehazing algorithm[125].
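For orientation, the DCP flow chart in (b) reduces to a dark-channel computation, an atmospheric-light estimate, and a transmission estimate. A compact sketch with typical default parameters (not the settings of Refs. [121,125], which also refine the transmission map) is given below.

```python
# Compact dark-channel-prior (DCP) dehazing sketch: dark channel, atmospheric
# light, and transmission estimate. Patch size, omega, and t0 are typical
# defaults (assumptions), and the transmission-refinement step is omitted.
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    # Minimum over RGB, then a local minimum filter over a patch x patch window.
    return minimum_filter(img.min(axis=2), size=patch)

def dehaze(img, patch=15, omega=0.95, t0=0.1):
    dark = dark_channel(img, patch)
    # Atmospheric light A: mean color of the brightest 0.1% dark-channel pixels.
    idx = np.argsort(dark, axis=None)[-max(1, dark.size // 1000):]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    t = 1.0 - omega * dark_channel(img / A, patch)        # transmission estimate
    t = np.clip(t, t0, 1.0)[..., None]
    return (img - A) / t + A                               # scene-radiance recovery
```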
    (a) Principle of polarization-difference imaging[127]. (b) Differences between conventional imaging and polarization-difference imaging[128]. (c) A shows the imaging result of the traditional Tyo model, and B shows the result under active linearly polarized illumination[129]. In (d)[130], (d1) is a polarization image, (d2) is a polarization-angle image, (d3) is the result of traditional polarization-difference imaging, and (d4) is the result of adaptive polarization-difference imaging.
    (a) Schematic diagram of the atmospheric scattering model[132,133]. In (b), A and B are the best and worst polarization images, respectively, and C is the dehazing result using the Y. Y. Schechner method[134,135]. In (c), A is the original intensity image under dense haze, and B is the result of the multi-scale polarization dehazing algorithm[136]. (d) Comparison of underwater scattering imaging results[137].
    (a) Principle of polarization imaging based on the Stokes vector[138]. In (b)[139], (b1) is the imaging result based on Stokes-vector interpolation, and (b2) is the result of traditional differential imaging. In (c)[140], (c1) is the original polarization image, and (c2) is the result of the polarization dehazing method based on polarization-angle distribution analysis. In (d)[141], (d1) is the original intensity image, (d2) is the reconstructed target image, and (d3) is the estimate of the backscattered light.
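Stokes-based polarization imaging as in (a) combines a few analyzer-angle images into the linear Stokes parameters. The sketch below assumes the common 0°/45°/90°/135° acquisition, which may differ from the specific schemes of Refs. [138–141].

```python
# Linear Stokes parameters, degree and angle of linear polarization from four
# analyzer orientations (0, 45, 90, 135 degrees). Standard relations only; the
# acquisition details of Refs. [138-141] differ.
import numpy as np

def linear_stokes(I0, I45, I90, I135):
    S0 = 0.5 * (I0 + I45 + I90 + I135)      # total intensity
    S1 = I0 - I90                           # horizontal/vertical preference
    S2 = I45 - I135                         # +/-45 degree preference
    dolp = np.sqrt(S1**2 + S2**2) / np.maximum(S0, 1e-12)   # degree of linear polarization
    aolp = 0.5 * np.arctan2(S2, S1)                          # angle of linear polarization
    return S0, S1, S2, dolp, aolp
```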
    (a) Principle of polarization difference based on the Mueller matrix[143]. In (b)[143], (b1)–(b3) are the intensity images of three targets in a highly concentrated scattering medium, (b4)–(b6) are the descattered images of the three targets under the worst linearly polarized illumination, and (b7)–(b9) are the descattered images under the optimal linearly polarized illumination. In (c)[144], (c1) is the intensity image, (c2) is the image recovered with the proposed descattering method, and (c3) and (c4) are magnified views of the region of interest marked with a red rectangle in (c1) and (c2).
    (a) Edrei et al. demonstrated a new microscopy technique that utilized the optical memory effect (OME)[145]. (b) Xie et al. described the relationship between the PSFs of thin scattering media at different reference points[146].
    (a) Super-resolution imaging through scattering media with SOSLI in comparison to other imaging techniques. (b) Principle and simulation results of SOSLI. (c) Experimental results of imaging through a ground glass diffuser with different techniques. (d) Experimental demonstration of three techniques for imaging several complex objects hidden behind a ground glass diffuser[147].
    (a) Single-shot speckle correlation[151]. (b) 3D imaging using a diffusing medium via spatial speckle intensity cross-covariance[152]. (c) Superposed reconstruction to enlarge the limited FOV[153]. (d) Pipeline for multitarget imaging through scattering media regardless of OME[154].
    (a) Wavefront shaping technology[160]. (a1) Experimental setup. (a2) System with a layer of airbrush paint present and an unmodified incident wavefront. (a3) The wavefront was shaped to achieve constructive interference at the target. (b) Spatiotemporal focusing by optimizing a two-photon fluorescence (2PF) signal[156]. (b1) Experimental setup. (b2) 2PF images before optimization at the optimized plane (x–y). (b3) 2PF images after optimization at the optimized plane (x–y). (c) Scattered-light fluorescence microscopy[157]. (c1) Experimental setup. (c2) Seen through the scattering layer with a wide-field fluorescence microscope. (c3) Seen through the scattering layer with an SLM microscope.
    (a) Measuring the transmission matrix in the spatial domain[161]. (a1) Experimental setup. (a2) Initial grayscale image. (a3) Reconstructed image using scattered input. (b) Measuring the transmission matrix in the spatial domain[162]. (b1) Experimental setup. (b2) Pattern before inserting the scattering medium. (b3) Reconstructed image using scattered input.
    (a) NLOS imaging based on a streak camera[163]. (a1) The process of capturing photons. (a2) An example of streak images sequentially collected. (a3) The 2D projected view of the hidden object. (b) NLOS imaging based on SPAD[164]. (b1) Experimental setup. (b2) Objects in the scene to be reconstructed. (b3) Reconstruction of the letter T. (c) NLOS imaging based on ToF[165]. (c1) Experimental setup. (c2) Unknown object. (c3) Reconstructed depth (volume as probability). (c4) Reconstructed depth (strongest peak).
    (a) Shape recovery from coherence measurements[166]. (a1) Experimental setup. (a2), (a3) Plots of real and imaginary components of SCF measured for the square and equilateral triangle objects, respectively. (b) NLOS imaging based on multimodal data fusion[167]. (b1) Experimental scene. (b2) The intensity sample. (b3) The reconstruction using this intensity sample alone. (b4) The additional measurement of scattered coherence. (b5) The reconstruction when both the intensity and coherence measurements are used.
    In (a)[169], (a1) is the example scenario, (a2) shows the recovered light fields for a simulated scene and different occluders, and (a3) is the recovered light field of another simulated scene. In (b)[170], (b1) is the experimental setup for computational periscopy, (b2) is the reconstruction procedure, and (b3) shows the reconstructions of different hidden scenes. In (c)[171], (c1) is the model of the scenario, and (c2) shows the still frames from reconstructed videos under a variety of different experimental settings.
    In (a)[172], (a1) is the geometry used in the Cook–Torrance model, (a2) is the experimental setup, with a monitor illuminating a diffuse surface and a DSLR camera with a polarizing filter imaging the illuminated region on a wall, and (a3) shows two similar monitor images with largely different effects on NLOS imaging. (b) NLOS imaging/enhancement system based on polarization information[173].
    (a) Corner setup. (b) Comparing HOG features in the raw frames and the denoised frames. (c) Reconstruction algorithm for 2D shape recovery and 3D localization. (d) A general diagram of the experiments. (e) A schematic diagram of the speckle correlation imaging setup with a monochromatic, pseudothermal source object in an around-the-corner geometry. (f) Image recovery under the pseudothermal setup. (g) A comparison of results under line-of-sight and NLOS conditions using the setup[174,175].
    General framework of computational optical systems. (a) Metalens. Metalenses can meet the needs of miniaturization and integration of optical systems[332]. (b) Simplified optical system. The simplified optical system seeks to achieve optimal performance of the system as a whole[214]. (c) Adaptive optical system. The adaptive optical imaging system is designed to eliminate the interference of complex environments with the amplitude and phase of the imaging light field[334]. (d) Coded aperture. The coded aperture increases the dimensionality of the collected information, enabling super-resolution and high-speed imaging[210]. (e) Single-pixel imaging. Only a single-pixel detector is used for spatial imaging; the advantages are high SNR and low cost[248]. (f) Wide-area optical system. Wide-area optical systems can achieve both a large FOV and high resolution[189].
    Three basic phase control methods of metalenses. (a) Resonance phase control[177]. (b) Propagation phase control[178]. (c) Geometric phase control[179].
    The operational status of the hyperspectral imaging device[186]. (a) Schematic diagram of the structure of the device. (b) Schematic diagram of the basic modulation unit, including, from top to bottom, the metasurface, microlens (used to increase quantum efficiency), and CMOS image sensor. (c) Snapshot of spectral imaging. The light from the object to be imaged is incident on the metasurface superunit. (d) Hyperspectral imaging chip with reconfigurable metasurface superunits placed on top of the camera.
    Mueller matrix imaging reflection results[187]. (a) Imaging of the Mueller matrix placed in the “Fourier plane” using a 4f imaging system, conjugated with two metasurfaces. Metasurface 1 generates structured polarized light illuminating the object, while metasurface 2 diffracts and analyzes the resulting field imaged onto the CMOS sensor. The aperture is placed in the Fourier domain to limit the FOV, and the zero-order block is placed to prevent sensor saturation. (b) Chrysina gloriosa, commonly known as the “chirality beetle,” illuminated by right-circularly polarized (RCP) and left-circularly polarized (LCP) lights and imaged with a standard digital camera. (c) Original image of the chiral beetle captured using the compact Mueller matrix imaging system. (d) Full Stokes image derived from the original image. (e) Mueller matrix image obtained from full Stokes image using a no-reference method (demodulation and normalization).
    Wide-field optical imaging methods. (a) Single-lens scanning imaging system: Meade LX200 mount and its imaging results[188]. (b) Multi-scale computational optical imaging system and its imaging results[189].
    Wide-field optical imaging methods. (a) Multi-detector splicing systems[191,192,194]. (a1) UltraCam-D (UCD) camera detector splicing scheme. (a2) Complete focal-plane array assembly of the Kepler telescope. (a3) ARGUS-IS imaging system. (a4) Full-FOV image. (b) Multi-aperture imaging system prototype and imaging results[198].
    (a) Coded aperture mask used in gamma-ray imaging. (b) Comparison of traditional sampling and coded exposure sampling[204]. (c) Wavefront coding imaging system[210]. (d) Schematic diagram of the coded aperture snapshot spectral imaging (CASSI) physical system[205]. (e) Schematic diagram of the CASSI imaging process[205].
    (a) Single-lens camera input image and deblurring results[211]. (b) Joint end-to-end optimization framework for optical design[214]. (c) Schematic of the diffractive telescope imaging experimental platform and comparison of point-target results with and without image restoration[220].
    Imaging results and image-quality evaluation of the Cooke triplet, the doublet lens, and the optical system based on deep learning combined with wavefront coding[223]. (a) Optical structure models of the three optical systems. (b)–(f) Imaging results of the different systems at defocus distances of −0.2, −0.1, 0, 0.1, and 0.2 mm, respectively. (g) Structural similarity (SSIM) values of the different systems within the defocus range. (h) Peak signal-to-noise ratio (PSNR) values of the different systems within the defocus range.
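The SSIM and PSNR curves in (g) and (h) use the standard definitions of those metrics. A minimal way to compute both for a restored image against a reference (generic metrics, not the exact evaluation protocol of Ref. [223]) is shown below.

```python
# Minimal PSNR/SSIM evaluation of a restored image against a reference,
# using scikit-image for SSIM (generic metrics, not Ref. [223]'s protocol).
import numpy as np
from skimage.metrics import structural_similarity

def psnr(ref, test, data_range=1.0):
    # PSNR = 10*log10(MAX^2 / MSE) for images scaled to [0, data_range].
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(data_range**2 / mse) if mse > 0 else np.inf

def ssim(ref, test, data_range=1.0):
    # For RGB inputs, recent scikit-image versions also need channel_axis=-1.
    return structural_similarity(ref, test, data_range=data_range)
```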
    Adaptive optics using direct wavefront sensing[334]. (a) The distortion of the wavefront (blue lines) is directly measured with a wavefront sensor and minimized by a wavefront modulator (e.g., a deformable mirror) to improve the image quality of a telescope. Sgr A*, Sagittarius A*. (b) Beads inside a Drosophila embryo. (c) Neurons in the zebrafish larval brain obtained without and with AO correction.
    (a) Schematic diagram of the light field camera structure[235]. (b) All-light images and detailed information[239]. (c) Optical model of the light field microscope[240].
    (a) Framework diagram of the compressed imaging (CI) camera and its imaging results[242]. (b) Hyperspectral “ghost imaging” camera experimental setup and results[248].
    (a), (b) Schematic diagrams of the two experimental setups. (c) Schematic diagram of generating a composite light pattern (64×64) in Experiment A. (d) Schematic diagram of generating a composite color illumination pattern (64×64) in Experiment B[253].
    Experimental setup and results of photoacoustic imaging[255]. (a) Experimental setup diagram. (b) Experimental phantom for photoacoustic imaging—distorted black polymer ribbon. (c) The z-y slice images of the polymer ribbon.
    (a) Single-pixel 3D imaging system. (b) Illumination laser pulses backscattered from the scene are measured as (c) broadened signals. (d) Image cubes containing images of different depths are obtained using measurement signals. (e) Each lateral position has an intensity distribution along the vertical axis, indicating depth information. (f) Reflectance and (g) depth maps can be estimated from the image cube and then used to reconstruct a 3D image of the (h) scene[256].
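Steps (e)–(g) amount to peak-picking along the depth axis of the reconstructed image cube. A simplified version of that read-out, assuming uniform depth bins rather than the full broadened-pulse model of Ref. [256], is sketched below.

```python
# Simplified read-out of steps (e)-(g): for each lateral pixel, the peak of the
# intensity distribution along the depth axis gives the depth, and its value
# gives the reflectance. Uniform depth bins are an assumption of this sketch.
import numpy as np

def cube_to_maps(cube, z_values):
    # cube: (num_depths, H, W) image cube; z_values: depth of each slice [m].
    peak_idx = np.argmax(cube, axis=0)                          # strongest return per pixel
    reflectance = np.take_along_axis(cube, peak_idx[None], axis=0)[0]
    depth = z_values[peak_idx]                                  # map indices to physical depth
    return reflectance, depth
```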
    (a) Non-uniform detector[260]. (b) Curved surface detector[264]. (c) Multidimensional physical quantity detector[335]. (d) Ultra-high-speed detector[336,337].
    (a) Empirical phase transition graph of non-uniform wavelet bandpass sampling (NUWBS) for multi-band signal acquisition compared to the theoretical ℓ1-norm phase transition for a Gaussian measurement ensemble (shown with the dashed purple line)[260]. (b) Nyquist real-time sampling and hybrid sampling[262]. (c) The 3D detection results for the left and right images and the corresponding bird's-eye-view results[263].
    (a) Schematic of the human visual system. (b) The human eye and (c) the retina. (d) Schematic of our eyes’ imaging system. (e) The working mechanism of eyes. (f) Perovskite nanowires and their crystal structures[265]. (g) Schematic of the test bench used for characterization of the curved digital X-ray detector, showing the X-ray source, bone phantom, and curved digital X-ray detector[264]. (h) Imaging results acquired by the adaptive imager for objects at different distances[266].
    (a) Schematic of metasurface-enabled quantum edge detection. (b) The ON/OFF switch state of the heralding arm. When the idler photon in the heralding arm is projected onto the state |H⟩, the switch is OFF (closed), and a solid cat image is captured; when the heralded photons are projected onto the state |V⟩, the switch is ON, and an edge-enhanced contour of the cat is obtained. (c) Edge-detection experiments with red and green HeNe laser sources[274].
    (a) Detection results in the multi-object detection experiment. (b) Object numbers in the multi-object detection experiment[277]. (c) Video reconstructions of high-speed physical phenomena[278]. (d) Data processing flow. (e) The event denoising results of the dataset overlaid on the corresponding image[280].
    Computational processing. (a) Image fusion [283,286,288]. (b) Computational image enhancement[292,297,302,309]. (c) Super-resolution reconstruction[317,318].
    MFF-GAN[283]. (a) Overall fusion framework. (b) Illustration of the decision block. (c) Network architecture of the discriminator. (d) Network architecture of the generator.
    PIAFusion network[286]. (a) The framework of PIAFusion network. (b) Visualized results of images and feature maps in the nighttime scenario. The first column shows the infrared image, visible image, and fused image, respectively. The following three columns present the feature maps corresponding to the infrared, visible, and fused images in various channel dimensions.
    MHF-net[288]. (a), (b) Illustrations of the observation models for HrMS and LrHS images, respectively. (c) Illustration of how to create the training data when HrHS images are unavailable. (d) Illustration of the blind MH/HS fusion net. (e) Experimental results.
    Results of three methods. (a) Lee’s method[290]. (b) SICE[291]. (c) JHE[292].
    IE-CGAN[293]. (a) An overview of IE-CGAN. (b) Results of two methods.
    Imaging results of four methods. (a) RetinexDIP[298]. (b) LLNet[295]. (c) Zero-DCE[297]. (d) Lv’s method[296].
    PWGCM[300]. (a) Overview of PWGCM. (b) Visualization of gamma correction map and the results in each iteration. (c) Results of several methods.
    Rivenson’s method[302]. (a) The schematic outlines the steps in the standard (top) and virtual (bottom) staining techniques. (b) Virtual staining GAN architecture. (c), (d) Virtual staining results match Masson’s trichrome stain for lung tissue sections.
    DPE-MEF[307]. (a) The architecture of the detail enhancement module. The numbers indicate the channel amounts. (b) The architecture of the color enhancement module. The numbers indicate the channel amounts. (c) Imaging results of DPE-MEF.
    HDR-GAN[309]. (a) Illustration of the proposed framework. (b) Imaging results of HDR-GAN.
    Experimental results of Wu's method[313]. (a) Sample HR images, including a wall image and a grape image, which are downsampled by a factor of 4 to obtain the corresponding LR images for testing. (b)–(d) Experimental results on the low-resolution images.
    Experimental result of Yang’s method[317]. (a) Low-resolution image. (b) The result of bicubic interpolation. (c) Results of the proposed method.
    DRLN[322]. (a) The detailed network architecture of DRLN. (b) Results of different methods. The key contrast parts in the red rectangle are magnified to display on the right. The LR image used for reconstruction is obtained by downsampling the HR image by a factor of 4.
    EMASRN[324]. (a) An overview of the EMASRN network. (b) Results of different methods. The key contrast parts in the red rectangle are magnified to display on the right. The LR image used for reconstruction is obtained by downsampling the HR image by a factor of 4.
    Experimental comparisons of differential ghost imaging (DGI), GISC (GI using sparsity constraint), and GIDC in terms of both the sampling ratio and the reconstruction SNR[327]. (a) Schematic diagram of the experimental setup. (b) Experimental results for binary objects. (c) Experimental results for a grayscale object. (d) Experimental results on a flying drone.
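As a reference point for this comparison, the DGI estimate correlates the bucket signal with the illumination patterns while normalizing out fluctuations of the total pattern intensity. A minimal sketch of that standard correlation (not the GIDC optimization studied in Ref. [327]) follows.

```python
# Minimal differential ghost imaging (DGI) correlation. 'patterns' is an
# (M, H, W) stack of illumination patterns and 'bucket' the M single-pixel
# measurements. Standard DGI estimate only; GIDC in Ref. [327] instead solves
# an iterative optimization problem.
import numpy as np

def dgi(patterns, bucket):
    R = patterns.sum(axis=(1, 2))                               # total pattern intensity per shot
    corr = lambda a, b: (a[:, None, None] * b).mean(axis=0)     # ensemble average <a * b(x, y)>
    # O(x, y) = <B I(x, y)> - (<B>/<R>) <R I(x, y)>
    return corr(bucket, patterns) - (bucket.mean() / R.mean()) * corr(R, patterns)
```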
    Get Citation


    Jinpeng Liu, Yi Feng, Yuzhi Wang, Juncheng Liu, Feiyan Zhou, Wenguang Xiang, Yuhan Zhang, Haodong Yang, Chang Cai, Fei Liu, Xiaopeng Shao, "Future-proof imaging: computational imaging," Adv. Imaging 1, 012001 (2024)

    Paper Information

    Category: Review Article

    Received: May 19, 2024

    Accepted: Jun. 20, 2024

    Published Online: Jul. 17, 2024

    Corresponding authors: Fei Liu (feiliu@xidian.edu.cn) and Xiaopeng Shao (xpshao@opt.ac.cn)

    DOI: 10.3788/AI.2024.20003